
After roughly half a decade to a decade of technical developments, including the transformer architecture for AI systems and several other computer science breakthroughs, the last 3–4 years have been frantically active in the development of specific applications, resulting in AI-based software that we didn’t even dream about just 10–20 years ago. We are talking chiefly about AI models that run powerful processing and analysis of text, computer code, images, audio, video, and even molecules, lately integrated into multimodal systems.
During 2024 many of these tools truly shone, starting to shape the immediate future and impacting not just our daily lives but also businesses and markets. We can now reliably use LLMs to summarize texts, search for specific pieces of information, or even solve simple- to mid-complexity problems; we can boost software writing, scripting, data analysis, and software utilization with LLMs that possess vast amounts of knowledge and behave like experts available 24/7. For those out there like me who struggle with drawing and designing, we can now get a lot done with AI-based image generators and editors. In science, tons of applications have emerged, with molecular structure prediction and molecular design probably ranking highest due to their relevance to modern biotechnology, medicine, and pharma.
In the last 2 years we have witnessed the AI ecosystem grow extremely fast, maturing in many ways, some of them unexpected: some branches growing far faster than others, some dying off quickly, some showing potential but stuck, and some facing such fierce competition that their finances come close to collapse. With such rapid advancements, markets shift quickly and businesses, especially R+D companies, must constantly adapt. In this article I reflect on all these points, with a look into what 2025 and the immediately following years might bring.
First of All, The Margins in AI Are Clearly Narrowing
Let’s open our discussion with a perhaps unexpected fact: the competitive margins in AI are narrowing quickly, and this is happening across all applications of AI. As I will exemplify below, what was initially the flagship product of a revolutionary company or lab was quickly imitated, and often improved upon, by others.
As Seen on Large Language Models
The clearest example is probably that of OpenAI and the revolutionary GPT-3 LLM it released a couple of years ago, and especially its ChatGPT interface. They certainly seemed unique, although we did know that others were working on LLMs, and far ahead of the competition. Even giants like Google seemed to be a decade or more behind. But in just one year the giants got down to business, and other companies also became competitive, in some cases putting forward models that were even better than OpenAI’s best model at a given time. Throughout 2024 we saw a race in which all the giants, along with smaller developers, fought for the top and exchanged pole positions. You can judge the current situation right now, as you read this, by visiting the Chatbot Arena leaderboard and comparing it with this screenshot taken on January 9, 2025:

Image generation
It is a bit harder to say who struck first with image generation, because it all started to look a bit revolutionary a few years ago with the coupling of the VQGAN and CLIP models:
How this "artificial dreaming" program works, and how you can create your own artwork with it
By 2020–2022, several much better models already existed:
All modern image generators in a single article and ready to run online
But then, with DALL-E 3, Stable Diffusion, and other competitors, it all went to even higher levels, as you’re probably well aware, with photorealistic tools that even work integrated into chatbot systems:
Like ChatGPT but With Web Search and Image Generation Capabilities, and Free on your Skype!
Again making the point of this whole discussion, the fact is that dozens of alternatives exist today.
AI for chemistry and biology
In addition to mainstream AI tools such as LLMs, which find applications everywhere, several niche applications emerged in the last 4–5 years, particularly in chemistry and biology since 2021, after DeepMind revealed how its AlphaFold 2 system won CASP14 (see here some key blog posts).
When this happened, those academics who worked on creating systems for protein structure prediction were devastated. But they soon capitalized on it all, adopting elements and ideas that DeepMind put into AlphaFold 2 to create all kinds of new AI-based tools. This included much more than mere structure prediction, such as processing and analyzing molecular structures and models and, of special interest, designing whole new proteins, all powered by AI:
Breaking boundaries in protein design with a new AI model that understands interactions with any…
All this, to the extent that some historically hard problems such as designing new proteins that bind to others (which has enormous applications in medicine), got quite close to being solved:
The "AlphaFold moment" for protein binder design might be imminent
Then, during 2024, DeepMind released its multimodal AlphaFold 3 system, and in a matter of months it already faced effective competitors that reportedly perform about as well, often under less restrictive licenses:
First Winners Emerge in the "Race" to Open-Source AlphaFold 3
Again, the case of AlphaFold 3 makes my point that competitive margins have narrowed dramatically. And as this case shows, this happened even for very niche applications, not just for AI systems in massive use by the general public such as LLMs or image generators.
By the way, this whole subsection relates directly to the recent Nobel Prize awarded to D. Baker of the Institute for Protein Design and to J. Jumper and D. Hassabis of DeepMind. I wrote an editorial that you can read here:
The Nobel Prize in Chemistry: past, present, and future of AI in biology – Communications Biology
Soon after, new AI models started to be built that attempt to capture the whole central dogma of molecular biology, with the many applications and implications this will have once achieved.
Bottom line
The above examples show that no developer is safe from the fast pace at which AI evolves, which ensures fierce and rapid competition. For end users this is good, because it has driven prices down drastically and has forced companies to make their AI systems more open. But this same factor makes it harder for companies to stand out or recoup their massive investments. Without a major breakthrough, such as what artificial general intelligence (AGI) would entail, or a fully working multimodal system that understands physics, chemistry, and biology end-to-end, businesses and investors may need to settle in and be patient, waiting a long time before seeing significant returns, with some possibly running out of fuel along the way.
Second, Advances Move Blazingly Fast
I’m sure you agree with this. Just when a branch within the big tree of AI R+D starts to slow down, another sprouts. For example, traditional LLMs like GPT-4 haven’t seen groundbreaking updates recently, but new models focused on smarter reasoning and decision-making, like OpenAI’s o1, which is supposed to have superior problem-solving capabilities, are pushing the frontiers of what’s possible in exciting ways. By the way, to learn more about o1’s purported superior problem-solving capabilities, check out this excellent introduction by Abhinav Prasad Yasaswi:
OpenAI o1: Is This the Enigmatic Force That Will Reshape Every Knowledge Sector We Know?
Multimodality is also making strides, in OpenAI’s case with models that can understand and generate images as well as understand and synthesize audio natively, that is, as part of the core AI model itself rather than by calling external sound recognition or synthesis systems. Abhinav Prasad Yasaswi also wrote a great post about his first hands-on experience with this system, which you can try in ChatGPT’s free version:
ChatGPT’s advanced voice mode is here! My first impressions…
Frontiers of AI in chemistry and biology: design and multimodality
In molecular modeling, there are right now two main lines of innovation running in parallel and intertwined. One has to do with understanding not just protein molecules, as AlphaFold 2 did, but all other kinds of molecules, from nucleic acids and the ions and small molecules that make up medicines and metabolites, to materials and more. The other big branch in the lead has to do with not just predicting but actually designing molecules, mainly proteins or small molecules that can be of use in the clinic. This inherently benefits from developments in multimodality, as I discussed earlier:
"Sparks of Chemical Intuition"—and Gross Limitations!—in AlphaFold 3
As of early January 2025, multimodal systems like AlphaFold 3, such as Chai-1, Protenix, and Boltz-1, along with RoseTTAFold All-Atom from Nobel laureate D. Baker’s lab, are all boosting the revolution started by AlphaFold 2 by allowing computers to understand proteins and their complexes with nucleic acids, ligands, lipids, and ions in radically new ways, crucial for a deep understanding of biological systems and for more rapid developments in pharma and biotech. Clearly, AI for science sits squarely in the fast-growing region of the S-curve of development.
Among the next branches of AI for science that are emerging, and could again exert great impact, is that of multimodal systems encompassing not just protein structures but also genomic data, with a core revolving around large language models applied to biological sequences, building on previous work that showed how AI can detect structure- and evolution-consistent patterns in biological sequences:
Protein Structure Prediction a Million Times Faster than AlphaFold 2 Using a Protein Language Model
Third, the Value of Prompt Engineering
Or, in other words, how context can improve AI’s performance in certain situations, sometimes quite dramatically, as DeepMind quantified some time ago across various LLMs:
New DeepMind Work Unveils Supreme Prompt Seeds for Language Models
While LLMs and other AI models suffer from severe limitations in problem-solving, as well as from strong biases and authoritative tones even when wrong, these limitations are at least very clear by now. In fact, we now know that these are among the biggest problems affecting most AI models, together with, and often rooted in, hallucinations, which can sometimes, but not always, be detected and hence suppressed:
A New Method to Detect "Confabulations" Hallucinated by Large Language Models
As they develop new models, companies are now very careful to suppress these problems as much as they can. This still faces difficulties: on the one hand because we humans always come up with new ways to trick and hack the models, and on the other because the safety protocols built into an AI tool sometimes make it block content that is actually innocuous. Cassie Kozyrkov delved into one such very recent example:
Back to the issue of prompting: you know how important this is if you are a frequent user of LLMs or AI image generators. Change one word and the result can be totally different; and usually, when this happens, it tends to arise from the model being close to hallucinating. In other words, if you ask the same question or request in different ways and consistently get the same output (i.e., answer, image, etc.), it is likely that the AI model is not hallucinating. Take this with a grain of salt, though!
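This consistency heuristic is easy to prototype. Below is a minimal sketch in Python, where `ask_llm` is a hypothetical stand-in for any function that sends a prompt to an LLM and returns its answer as a string; no real API is called here:

```python
from collections import Counter

def consistency_check(ask_llm, paraphrases):
    """Ask the same question phrased in different ways and measure agreement.

    ask_llm is any callable that takes a prompt string and returns the
    model's answer as a string (a stub here, not a real API call).
    """
    answers = [ask_llm(p).strip().lower() for p in paraphrases]
    top_answer, count = Counter(answers).most_common(1)[0]
    # High agreement across rephrasings suggests the model is probably not
    # hallucinating; low agreement is a warning sign.
    return top_answer, count / len(answers)

# Toy usage with a stub model that always answers the same thing:
stub = lambda prompt: "Paris"
answer, score = consistency_check(stub, [
    "What is the capital of France?",
    "Which city is France's capital?",
    "Name the capital city of France.",
])
```

Of course, a model can also be consistently wrong, which is why this remains a heuristic rather than a guarantee.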
The fact is that even the smartest AI models are limited by what they know, often struggling with tasks that require detailed knowledge about a very specific situation. Prompting the model with relevant information alongside the request can help tremendously; however, many applications may require excessively large contexts, in which the "lost-in-the-middle" problem arises, as discussed by Vladimir Blagojevic and Jérôme DIAZ on Towards Data Science:
Enhancing RAG Pipelines in Haystack: Introducing DiversityRanker and LostInTheMiddleRanker
Why Retrieval-Augmented Generation Is Still Relevant in the Era of Long-Context Language Models
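To make the idea of grounding a model in retrieved context concrete, here is a minimal, self-contained sketch of retrieval-augmented prompting. The keyword-overlap retriever is a deliberately naive stand-in for the embeddings and vector index a real RAG pipeline would use, and all names are illustrative:

```python
def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap retriever; real RAG pipelines use embeddings
    and a vector index instead of word-set intersection."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, documents):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "AlphaFold 2 won CASP14 in 2020.",
    "VQGAN and CLIP were coupled for image generation.",
    "Chatbot Arena ranks LLMs by human preference.",
]
prompt = build_prompt("Who won CASP14?", docs)
```

Keeping only the top few retrieved passages in the prompt is also one way to mitigate the lost-in-the-middle effect discussed above.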
Fourth, and Still Amazed at This: AI is Transforming How Software is Used and Developed
That’s right, and you surely know if you do data science or write code: AI is changing how software is created and used, and how data is analyzed. The thing is that with the use of AI-based tools, developers can work faster and more efficiently, while non-programmers with minimal technical knowledge can quickly create scripts and small pieces of software.
And one doesn’t even need to use an advanced programming "copilot" such as GitHub Copilot or VS Code’s integrated AI assistant. Just opening ChatGPT or any similar system and asking your questions in natural language is enough. I reported long ago how powerful even GPT-3 was for this:
Creating JavaScript functions and web apps with GPT-3’s free code writer
By impacting coding, AI impacted software utilization and data analysis
As implied in the introduction to this section, AI’s impact on code writing meant, in turn, an impact on how we interact with data and software. This happens because AI models, especially LLMs, can mediate between a user’s intentions, as expressed in natural language, and a program’s internal code, so that the user can quickly achieve results without writing any code or instructions. I envisioned this as soon as GPT-3 became available programmatically, over two years ago:
Control web apps via natural language by casting speech to commands with GPT-3
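The general pattern of casting natural language to internal commands can be sketched as follows. The command names, the prompt template, and the `fake_llm` stub are all hypothetical; the key point is that the LLM’s output is validated against a whitelist before anything is executed:

```python
# Map of internal commands the program exposes; names are illustrative only.
COMMANDS = {
    "rotate": lambda deg: f"rotated view by {deg} degrees",
    "zoom":   lambda factor: f"zoomed by factor {factor}",
}

CASTING_PROMPT = """You control a molecular viewer. Translate the user's
request into exactly one line: <command> <argument>.
Allowed commands: rotate <degrees>, zoom <factor>.
User request: {request}"""

def dispatch(llm_output):
    """Validate the LLM's output against the command whitelist before
    executing anything, so a hallucinated command is simply rejected."""
    parts = llm_output.strip().split()
    if len(parts) == 2 and parts[0] in COMMANDS:
        return COMMANDS[parts[0]](parts[1])
    return "unrecognized command"

# Stub standing in for a real LLM call:
fake_llm = lambda prompt: "rotate 90"
result = dispatch(fake_llm(CASTING_PROMPT.format(request="turn it sideways")))
```

Constraining the model to a small, explicit command grammar is what makes this approach reliable enough to drive a real interface.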
In our virtual-reality-based software for molecular graphics, this protocol of LLM-based casting from natural-language requests to internal code is especially handy, as users have their hands busy handling molecules. You can learn more about this in a dedicated post, which actually shows how our program couples several AI-based tools for a complete experience:
Coupling Four AI Models to Deliver the Ultimate Experience in Immersive Visualization and Modeling
The Future of Chemistry Education is just around the corner with HandMol
And facilitating software utilization includes easier use of data analysis programs. One especially great example is R-Tutor, a web-based tool that helps users learn and apply R programming to data analysis and visualization problems by converting their requests into R code:

One can then build one’s own programs capitalizing on this approach, achieving highly customizable and powerful systems like those I describe here:
Powerful Data Analysis and Plotting via Natural Language Requests by Giving LLMs Access to…
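At the core of such systems is the idea of letting an LLM write analysis code that is then executed on the user’s data. The sketch below illustrates this with a hard-coded string standing in for the LLM’s output and a deliberately restricted execution namespace; a production system would need far stricter sandboxing:

```python
def run_generated_code(code, data):
    """Execute LLM-generated analysis code over the user's data.

    Only a tiny whitelist of builtins is exposed, so the generated snippet
    cannot import modules or touch the filesystem. This is a sketch, not
    a secure sandbox.
    """
    safe_builtins = {"sum": sum, "len": len, "min": min, "max": max}
    namespace = {"data": data, "result": None}
    exec(code, {"__builtins__": safe_builtins}, namespace)
    return namespace["result"]

# Stand-in for what an LLM might return for "what's the average of my data?":
generated = "result = sum(data) / len(data)"
mean = run_generated_code(generated, [2, 4, 6])  # 4.0
```

The user never sees the generated code unless they want to; they only phrase requests in natural language and receive results.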
Everything we covered in this section is leading to two major trends: A boom in small, specialized software tools tailored to specific problems; and a resurgence in custom software development as companies realize how much they can accomplish with AI-powered workflows. As a result, it is likely that more businesses will have access to the tools they need for faster developments, analyses, and decision-making, driving innovation faster than ever.
Fifth: We Learned That Sometimes, Simpler is Better…
..and AI isn’t everything.
Indeed, it often happens that traditional tools are very well established and reliable, so businesses prefer them because they know they will work, even if they’re not cutting-edge or flashy. As a consequence, startups in the AI tools market are learning that, in order to succeed, their products must deliver real, unique value and not just the novelty of automation. While it is unclear to what extent companies are reverting to older tools for some of their procedures, we have seen many ideas fail; for example, automatic image generation works very well for many things but still can’t beat human designers for some applications, especially those that require factual accuracy.
This also holds for AI in chemistry and biology, which I touched on several times above. It turns out that recent editions of CASP (the contest that saw AlphaFold 2 win in 2020) have shown, for example, that AI-based modeling of nucleic acids is still poor, while old-style homology modeling works much better when a template is available. Likewise, ligand docking and virtual screening are expected to benefit tremendously from AI-powered systems, but detailed and extensive tests are still missing, so one may be better off with a lower-performing but at least well-understood traditional approach.
Sixth: AI Boosted a Resurgence of Companies Providing (Highly Specialized) Services
Traditionally, the tech world put forward products that customers could set up themselves, avoiding services like consulting, installation, and support as much as possible, although these were often unavoidable, of course. But this is changing in the AI era; actually, it has already changed, with many companies doing exactly that: offering services, from some as simple as training to others as complex as specific software or AI model development, tuning, and deployment.
See for example Gladia, a company that offers a simplified API wrapper for OpenAI’s Whisper speech recognition system, as I covered here:
Web Speech API: What Works, What Doesn’t, and How to Improve It by Linking It to a GPT Language…
Or think of Tamarind.Bio, a company whose main product is a series of services built around the idea of facilitating access to highly specialized AI models for biology:
The hook here is that most software, and especially AI-based software, requires quite a lot of customization to fit different environments and applications. That’s where these service providers help: with training or fine-tuning existing models, integrating them into a company’s workflows, deploying models in ways that are simpler for others to use, and so on. In fact, some companies are even building entirely new service-based business models, combining cutting-edge AI tools with domain expertise to deliver results.
Looking Ahead
As we move into 2025 it has become very clear that we are already quite deeply into the AI revolution, with its effects already reshaping industries, jobs, and creativity. The progress we have made so far is extraordinary, and the opportunities ahead are even greater. From smarter tools to breakthroughs in science and technology, AI is helping us solve problems faster and work in entirely new ways, also allowing new businesses and jobs to arise.
While challenges like fairness, ethics, and access remain, our growing experience with AI now puts us in a better position to address them, in contrast to the situation just 2–3 years ago, when the climb up the S-curve of AI development started. The next few years will be not only about the new technology itself but also about finding the best ways to live and work with it.
Other Interesting Posts on AI
Provocatively, Microsoft Researchers Say They Found "Sparks of Artificial Intelligence" in GPT-4
Epic "Crossover" Between AlphaFold 3 and GPT-4o’s Knowledge of Protein Data Bank Entries
Scientists Go Serious About Large Language Models Mirroring Human Thinking
Are we and computers ready for routine interaction via speech?
www.lucianoabriata.com I write and photoshoot about everything that lies in my broad sphere of interests: nature, science, technology, programming, etc. Become a Medium member to access all its stories (affiliate links of the platform for which I get small revenues without cost to you) and subscribe to get my new stories by email. To consult about small jobs check my services page here. You can contact me here.