The State of AI in 2021: Language models, healthcare, ethics, and AI agnosticism

25 October 2021 Hi-network.com

AI is expanding in two key areas of human activity and market investment -- health and language. Picking up the conversation from where we left off last week, we discussed AI applications and research in those areas with AI investors and authors of the State of AI 2021 report, Nathan Benaich and Ian Hogarth.

After releasing what was probably the most comprehensive report on the State of AI in 2020, Air Street Capital and RAAIS founder Nathan Benaich and AI angel investor and UCL IIPP visiting professor Ian Hogarth are back for more.

Last week, we discussed AI's underpinning: Machine learning in production, MLOps, and data-centric AI. This week we elaborate on specific areas of applications, investment, and growth.

AI in Healthcare

Last year, Benaich and Hogarth made the case that biology was experiencing its AI moment. This, they explained, reflects a huge inflection in published research: the old-school method of running some kind of statistical analysis on biological experiments is being torn out and replaced, in most cases, with deep learning, which has yielded better results.

There's a lot of low-hanging fruit within the biology domain that could fit into this paradigm, Benaich noted. Last year was when this problem-solving approach of using machine learning for various tasks went into overdrive. One of the outputs of this idea of using machine learning in biology can be seen in the pharmaceutical industry.

For decades we've all known, and all suffered, the fact that drugs take way too long to be discovered, tested, and ultimately approved. That is, unless there is some immense cataclysmic pressure to do otherwise, which is what we saw with COVID-19 vaccines, Benaich went on to add. For many years, incumbent pharma and new age pharma were at odds:

"Incumbent pharma is very much driven by having a hypothesis a priori, saying for example -- I think this gene is responsible for this disease, let's go prosecute it and figure out if that's true. Then there are the more software-driven folks who are on this new age pharma. They mostly look at large scale experiments, and they are asking many questions at the same time. In an unbiased way, they let the data draw the map of what they should focus on.

That's what progress in deep learning unlocked. So the new age pharma has largely said, well, the old pharma approach has been tried before. It sort of doesn't work. That's computational chemistry and physics. The only way to validate whether the new age pharma approach works, is if they can generate drug candidates that are actually in the clinic, and ultimately, get those drugs approved," said Benaich.

The duo's report highlights two "new age pharma" IPOs that prove the point. The State of AI in 2020 predicted that "one of the leading AI-first drug discovery startups either IPOs or is acquired for >$1B." Recursion Pharmaceuticals IPO'd in April 2021, and Exscientia filed to IPO in September 2021. Exscientia is one of the companies in Air Street Capital's portfolio, so Benaich has one more reason to celebrate.

The duo think the two IPOs are a pretty big deal because both companies have assets generated through their machine learning-based approach that are actually in the clinic. Exscientia in particular is the first, and so far the only, company to have generated and designed molecules using its machine learning system. The way it works is that the software takes a variety of desired characteristics of a molecule and is tasked with generating ideas for what a molecule that fits those characteristics and meets the trade-off requirements could look like, Benaich noted.
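To make the trade-off idea concrete, here is a minimal sketch, assuming the open-source RDKit library, of how generated candidate molecules might be ranked against several desired properties at once. It is purely illustrative; Exscientia's actual system is proprietary, and the properties and weights below are invented for the example.

```python
# Illustrative sketch only -- not Exscientia's actual system. Assumes the
# open-source RDKit library for computing molecular properties.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

# Hypothetical candidates a generative model might propose (SMILES strings)
candidates = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]

def score(mol):
    """Combine several properties into one trade-off score (toy weights)."""
    return (
        QED.qed(mol)                               # drug-likeness, higher is better
        - 0.1 * abs(Descriptors.MolLogP(mol) - 2)  # prefer LogP near 2
        - 0.005 * Descriptors.MolWt(mol)           # penalize heavy molecules
    )

mols = [Chem.MolFromSmiles(s) for s in candidates]
ranked = sorted(zip(candidates, mols), key=lambda p: score(p[1]), reverse=True)
for smiles, mol in ranked:
    print(f"{smiles:30s} score={score(mol):.3f}")
```

A real system would generate candidates with a learned model and score far richer objectives (synthesizability, toxicity, binding affinity), but the shape of the loop is the same: propose, score against trade-offs, keep the best.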

It's the first company to have had three such drugs in clinical trials in the last twelve months. Its IPO documentation makes for an interesting read, because it shows that the number of chemical ideas the company needs to prosecute before finding one that works is an order of magnitude lower than what you see at traditional pharmaceutical companies, Benaich went on to add.

Benaich emphasized that even though this seems big to "technology folks like us", it's still very, very small in the overall context of the industry. These behemoth pharma companies are worth hundreds of billions of dollars, while together Recursion and Exscientia are worth $10 billion at best. Remembering what some other AI folks we spoke to earlier this year shared, we asked whether Benaich sees those practices being adopted in "old pharma" too.

"Totally. Even locally in London, AstraZeneca and GSK are beefing up their machine learning team quite a bit too. It's one of those examples of a mentality shift of how business is done. As younger generations who grew up with computers and writing code to solve their problems, as opposed to running more manual experiments in their spare time, end up in higher levels of those organizations, they just bring different problem-solving toolkits to the table," Benaich noted.

Large language models are a big deal

Change is inevitable. The question will ultimately be whether you can actually shift the cost curve, spending less money on fewer experiments while achieving a higher hit rate. That will still take time, Benaich thinks. Hogarth noted that this is not the only frontier on which machine learning is impacting pharma companies, pointing to the example of how machine learning is also used to parse research literature.

This touched upon our previous conversation with John Snow Labs CTO David Talby, as Natural Language Processing for the healthcare domain is John Snow Labs' core expertise. This, in turn, inevitably led the conversation to language models.

Benaich and Hogarth point to advances in language models in the research section of their report; however, we were drawn to the commercialization side of things. We focused on OpenAI's GPT-3, and how OpenAI went from publishing its models in their entirety to making them commercially available through an API, partnering with Microsoft.

Takeaways from an action-packed 2021 for AI: Healthcare is just getting started with its AI moment, the bigger the language models, the bigger the complications, and there may now be a third pole for AGI. 

OpenAI's API move gave birth to an ecosystem of sorts. We have seen, and toyed with, many startup offerings leveraging GPT-3 to build consumer-facing products. Those startups offer copywriting services: marketing copy, email and LinkedIn messages, and so on. We were not particularly impressed by them, and neither were Benaich and Hogarth.

However, for Benaich, the benefit that opening up GPT-3 as an API has generated is massive awareness of what language models could do if they keep getting better. He thinks they are going to get better very quickly, especially as OpenAI starts to build offshoots of GPT-3, such as Codex.

Judging from Codex, which was "a pretty epic product which has been crying out for somebody to build it", vertical-focused models based on GPT-3 will probably be excellent, Benaich and Hogarth think. Investors are getting behind this too: startups have raised close to $375 million in the last 12 months to bring LLM APIs and vertical software solutions to customers who cannot afford to compete directly with Big Tech.

The other way to think about it is that there is a certain element of fashion in what developers coalesce around, Hogarth noted. Attention-drawing applications such as Codex, or previously Primer's attempt to use AI to address Wikipedia's gender imbalance, show what's possible. Eventually, what was previously state of the art becomes mainstream, and the bar for the state of the art moves.

So-called large language models (LLMs) are beginning to make waves in ways that are not always anticipated. For example, they have given birth to a new programming paradigm, Software 3.0, or prompt programming. The idea is to prompt LLMs in a way that triggers them to produce the results users are interested in.
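As a concrete illustration of prompt programming, here is a minimal sketch using the openai Python client roughly as it existed in 2021; the engine name, parameters, and the classification task itself are assumptions for the example, not a prescription.

```python
# A minimal sketch of "prompt programming": instead of writing logic, you
# write a few-shot prompt that steers the model. Assumes the openai Python
# client roughly as it existed in 2021; engine name and fields may differ.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = """Translate product feedback into a one-word sentiment label.

Feedback: The setup took five minutes and it just worked.
Sentiment: positive

Feedback: It crashed twice before I gave up.
Sentiment: negative

Feedback: Battery life is stunning, easily two days.
Sentiment:"""

response = openai.Completion.create(
    engine="davinci",   # a 2021-era GPT-3 engine name
    prompt=prompt,
    max_tokens=1,
    temperature=0.0,    # near-deterministic output for classification
)
print(response.choices[0].text.strip())  # expected: "positive"
```

The "program" here is the prompt itself: the two worked examples teach the model the task, and changing them changes the behavior without touching any code.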

Even beyond that, we see similar language models being used in other domains, Benaich noted. He referred to research published in Science, in which a language model was repurposed to learn the viral spike protein and then determine which versions of the COVID-19 spike protein were more or less virulent. This, in turn, was used to forecast potential evolutionary paths the virus might take to produce more or less potent versions, which could be used to proactively stockpile vaccines.
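The gist of that approach can be sketched with a deliberately tiny language model over amino-acid sequences: fit it on known sequences, then score mutants by likelihood. This toy bigram model stands in for the much larger models used in the actual research, and all sequences below are illustrative.

```python
# Toy illustration of the idea, not the Science paper's model: fit a tiny
# bigram language model on protein sequences, then score mutants by
# log-likelihood. The real work used far larger neural language models.
import math
from collections import Counter

training_seqs = ["MFVFLVLLPLVSSQ", "MFVFLVLLPLVSTQ", "MFVFLVLLPLVSSE"]  # toy data

counts, context = Counter(), Counter()
for seq in training_seqs:
    for a, b in zip(seq, seq[1:]):
        counts[(a, b)] += 1
        context[a] += 1

def log_likelihood(seq, alpha=1.0, alphabet=20):
    """Add-alpha smoothed bigram log-likelihood of a sequence."""
    ll = 0.0
    for a, b in zip(seq, seq[1:]):
        p = (counts[(a, b)] + alpha) / (context[a] + alpha * alphabet)
        ll += math.log(p)
    return ll

wild_type = "MFVFLVLLPLVSSQ"
mutant = "MFVFLVLLPLVSSW"  # hypothetical single substitution
print(log_likelihood(wild_type), log_likelihood(mutant))
# A mutant that scores much lower is "ungrammatical" to the model -- the
# intuition behind ranking likely vs. unlikely viral variants.
```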

Benaich believes that LLMs can internalize various basic forms of language, whether it's the language of biology, chemistry, or humans. Hogarth chimed in, saying that this is in a way unsurprising: language is so malleable and extensible that we're only going to see unusual applications of language models grow.

AI Agnosticism

Of course, not everyone agrees with this view, and not everyone thinks everything about LLMs is wonderful. On the technical side of things, many people question the approach LLMs are taking. This is something we have repeatedly referred to, and really a long-standing debate within the AI community.

People in the AI community like Gary Marcus, whom we hosted in a conversation about the future of AI last year, or Walid Saba, whose aptly named contribution "Machine Learning Won't Solve Natural Language Understanding" was a runner-up for this year's Gradient Prize, have been vocal critics of the LLM approach.

In what many people would claim resembles a religious debate in some ways, Hogarth is a fan of what he calls a more agnostic approach:

"We have what you'd call the atheist view, which is -- these models aren't going to get us much further. They don't really understand anything. There's the true believer view, which is -- all we need to do is scale these up and they'll be completely sentient. There's a view in the middle, a slightly more agnostic view that says -- we've got a few more big things to discover, but these are part of it".

Hogarth believes that the "agnostic view" has the right amount of deference for how much LLMs are able to do, but also captures the fact that they lack causal reasoning and other major building blocks. Speaking of scale, the fact that LLMs are humongous also has humongous implications for the resources needed to train them, as well as for their environmental footprint.

Interestingly, after being in the eye of the storm on AI ethics with Timnit Gebru's firing last year, Google made the 2021 State of AI Report for work on a related topic. Even though most people tend to focus on the bias aspect of Gebru's work, for us the environmental footprint of LLMs that this work touched upon is at least equally important.

Researchers from Google and Berkeley evaluated the energy and CO2 budget of five popular LLMs and proposed formulas for researchers to measure and report these costs when publishing their work. The major factors driving CO2 emissions during model training are the choice of neural network (especially dense vs. sparse), the geographic location of the data center, and the processors used; optimizing these choices reduces emissions.
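Their accounting can be sketched in a few lines: estimate energy from training time, processor count, average processor power, and datacenter efficiency (PUE), then multiply by the local grid's carbon intensity. The shape of the calculation follows the paper's approach; all the numbers in the example are invented placeholders.

```python
# Back-of-the-envelope carbon accounting in the spirit of the Google/Berkeley
# proposal: energy = train-hours x chips x average chip power x datacenter PUE,
# then emissions = energy x the local grid's carbon intensity. The figures
# below are illustrative placeholders, not measurements from the paper.
def training_co2e_kg(hours, n_chips, avg_chip_watts, pue, grid_kgco2e_per_kwh):
    energy_kwh = hours * n_chips * (avg_chip_watts / 1000.0) * pue
    return energy_kwh * grid_kgco2e_per_kwh

# Hypothetical run: 240 hours on 512 accelerators at 300 W average,
# PUE of 1.1, in a region emitting 0.4 kgCO2e per kWh.
print(f"{training_co2e_kg(240, 512, 300, 1.1, 0.4):,.0f} kg CO2e")
```

The formula makes the report's point visible: the same model trained in a low-carbon region on efficient processors can emit a fraction of the CO2 of an identical run elsewhere.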

Commenting on the high-profile Gebru incident, Hogarth commended Gebru for her contribution. At the same time, he noted that if you're going to put these LLMs into production through large search engines, more tension arises when you start to question the bias within those systems or the environmental concerns.

Ultimately, that creates a challenge for the corporate parent to navigate in putting this research into production. For Hogarth, the most interesting response to that has been the rise of alternative governance structures. More specifically, he referred to EleutherAI, a collective of independent AI researchers who open-sourced their 6-billion-parameter GPT-J LLM.

"When EleutherAI launched, they explicitly said that they were trying to provide access to large pre-trained models, which would enable large swathes of research that would not be possible while such technologies are locked way behind corporate walls, because for-profit entities have explicit incentives to downplay risks and discourage security probing", Hogarth mentioned.

EleutherAI is, in effect, an open-source LLM alternative now. Interestingly, there is also what Benaich and Hogarth called a "third pole" in AGI research, next to OpenAI and Google / DeepMind: Anthropic. The common thread Hogarth, who is an investor in Anthropic, found is governance. Hogarth is bullish on Anthropic's prospects, mainly due to the caliber of the early team:

"The people who left open AI to create Anthropic have tried to pivot the governance structure by creating a public benefit corporation. They won't hand control over the company to people who are not the company or its investors. I don't know how much progress is made towards that so far, but it's quite a fundamental governance shift, and I think that that allows for a new class of actors to come together and work on something", Hogarth said.

As usual, both the conversation with Benaich and Hogarth and this write-up fall short of doing justice to the burgeoning domain that is AI today. Until we revisit it, even browsing through the 2021 State of AI Report should provide lots of material to think about and explore.
