Big-tech firms like IBM and Amazon lead generative AI boom with new tools


Over the past year or so, generative artificial intelligence (AI) has gained immense traction within the global tech landscape. 

This is largely due to its innovative ability to alter how businesses and individuals approach problem-solving, creativity and decision-making. In fact, the versatility and efficiency of generative AI applications have led to their adoption across a wide range of industries, from healthcare to entertainment, which is evident from its rapidly expanding market size.

As of 2023, the global generative AI market was valued at $12.1 billion; however, this figure is set to rise to $119.7 billion by 2032, according to some projections.

Moreover, throughout 2022, a time when discussions surrounding this technology had not yet become mainstream, generative AI startups were able to raise $2.6 billion across 110 deals, a number which rose to nearly $50 billion in 2023, with prominent companies like OpenAI, Anthropic and Inflection AI securing several billion dollars each.

Another clear indicator of rising interest in this space is the growing number of searches related to the term “generative AI.” As can be seen from the chart below, following the release of OpenAI’s ChatGPT platform, interest in the technology spiked drastically — peaking during the month of June — particularly across countries like Singapore, China, Hong Kong, India and Israel.

As the realm of AI-enabled tech continues to evolve, its application scope expands, leading more companies to integrate these technologies into their operations.

Ilan Rakhmanov, founder and CEO of an AI infrastructure provider for blockchain entities and Web3 projects, told Cointelegraph: “Most well-known brands can now afford to engage with generative AI and use it as a competitive edge. Also, we know what generative AI is capable of, but we still have a limited understanding of how it will evolve in the long-term future as more and more organizations and individuals leverage the technology and as a growing number of models train on its associated data sets.”

Mainstream entities exploring generative AI

At the turn of the new year, JPMorgan announced the release of DocLLM, a generative large language model (LLM) tailored for multimodal document understanding. It can reportedly analyze and process data associated with a range of enterprise documents — from forms and invoices to contracts and reports — often containing complex combinations of text and layout.

What sets DocLLM apart is its unique operational design, as it eschews the heavy reliance on image encoders common among existing multimodal language models. Instead, it focuses on bounding box information, integrating spatial layout structures more effectively. This is achieved through a novel disentangled spatial attention mechanism that refines the attention process in classical transformers.
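DocLLM's exact formulation is specified in JPMorgan's paper; as a rough illustration of the general idea only, the attention score can be decomposed into separately weighted text and layout interactions. The function name, weighting scheme and shapes below are illustrative assumptions, not DocLLM's actual implementation:

```python
import numpy as np

def disentangled_attention(txt_q, txt_k, box_q, box_k, v, weights=(1.0, 1.0, 1.0, 1.0)):
    """Toy sketch: the attention score is a weighted sum of four
    interactions -- text/text, text/layout, layout/text and
    layout/layout -- so bounding-box geometry influences attention
    without being fused into the token embeddings, which is the
    rough intuition behind 'disentangled' spatial attention."""
    d = txt_q.shape[-1]
    scores = (weights[0] * txt_q @ txt_k.T
              + weights[1] * txt_q @ box_k.T
              + weights[2] * box_q @ txt_k.T
              + weights[3] * box_q @ box_k.T) / np.sqrt(d)
    # standard softmax over the combined scores
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ v  # one attention head's output
```

Setting the layout weights to zero recovers plain scaled dot-product attention, which is why the approach can sidestep a heavy image encoder: layout enters as an additive score term rather than as pixels.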

Amazon has also stepped up its generative AI game by integrating a new tool to assist sellers on its platform. The tool generates accurate and engaging product descriptions, significantly easing the process of listing new products, and has reportedly already been adopted by a majority of Amazon sellers.


Mistral’s new sparse mixture of experts (SMoE) model has gained immense traction in the developer community thanks to its speed, efficiency and extensive feature set. The model is open-source, making it a go-to tool for developers creating unique language models with limited resources.
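Mistral's routing details are described in its own release; as a generic illustration of why sparse MoE layers are efficient, a gating network scores every expert but only the top few actually run per input. All names and shapes here are hypothetical, not Mistral's implementation:

```python
import numpy as np

def sparse_moe(x, experts, gate_w, k=2):
    """Toy top-k sparse mixture-of-experts layer: score all experts
    with a linear gate, run only the k highest-scoring ones, and
    combine their outputs with softmax-normalised gate weights.
    Compute cost scales with k, not with the total expert count."""
    logits = x @ gate_w                    # one score per expert
    top = np.argsort(logits)[-k:]          # indices of the k best experts
    gate = np.exp(logits[top] - logits[top].max())
    gate /= gate.sum()
    return sum(g * experts[i](x) for g, i in zip(gate, top))
```

The sparsity is the point: a model can hold many experts' worth of parameters while paying the inference cost of only k of them per token.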

DeepMind, a subsidiary of Google, has also continued to be a significant player in the generative AI arena, with its research feeding into Google services such as Translate. A notable recent contribution is Bard, a chatbot that mirrors the capabilities of ChatGPT and allows users to generate high-quality text and creative content.

Amazon Web Services (AWS) has made its mark with the introduction of Bedrock, a service that offers access to a variety of models from different AI companies. Bedrock is particularly notable for its comprehensive developer toolsets, which are instrumental in building and scaling generative AI applications.

Cloud-based software company Salesforce has integrated generative AI algorithms — collectively referred to as “Einstein GPT” — into its customer relationship management platform, thereby significantly enhancing customer engagement and personalization.

Lastly, IBM launched watsonx, an enterprise AI platform that combines generative AI techniques with natural language processing (NLP) and machine learning (ML).

What does the future hold for generative AI?

Even though generative AI seems poised for transformative growth, the sector is still navigating uncharted terrain filled with both promise and challenges. According to Rakhmanov, the trajectory for generative AI-driven technologies still largely depends on the development of models that are not only reliable but also bring tangible value to their users. He added:

“The future of generative AI is somewhat uncertain as it evolves with wider adoption and more data. However, the ‘black-box’ nature of many AI models poses a significant challenge, as it could lead to problems in verifying the reliability of data and insights. Without clarity on how AI models produce outputs, public support for mainstream AI could wane.”

On a somewhat similar note, Scott Dykstra, chief technical officer and co-founder of Space and Time — an AI-enabled, Microsoft-backed decentralized data warehouse — told Cointelegraph that even though there is a lot of hype surrounding generative AI, the reality is much more nuanced.

Dykstra said that, as things stand, most Fortune 500 companies are navigating the generative AI space rather conservatively, which is demonstrated by the fact that most of them are happy to “simply add an AI chatbot to their website and call it a day.” He then went on to add:

“The problem is that enterprises have to operate at enterprise scale, and today, it’s really expensive to do so. While GPT-4 is in a clear lead in terms of quality of inference, it’s also quite expensive for the workloads of enterprise production-grade products. Across the board, we need to see token prices driven down, faster inference, and more tools around automating retrieval augmented generation.”
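Retrieval-augmented generation, which Dykstra mentions, pairs a model with a search step over external documents: relevant passages are retrieved and prepended to the prompt so the model answers from fresh, verifiable context. A minimal sketch of the retrieval half follows; the embeddings are stand-ins, since a real pipeline would use a learned embedding model and a vector database:

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=2):
    """Rank documents by cosine similarity to the query embedding
    and return the top k, to be injected into the model's prompt."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    top = np.argsort(sims)[::-1][:k]
    return [docs[i] for i in top]

def build_prompt(question, context_docs):
    """Prepend retrieved passages as context before the question."""
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Automating this loop — chunking, embedding, indexing and prompt assembly — is the tooling gap Dykstra alludes to.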

Problems impeding the growth of generative AI

As noted earlier, the evolution of generative AI is not without its hurdles. Dykstra believes a crucial technical challenge for generative models (such as LLMs) will be the speed of their respective token streams. “For a real LLM-based internet, what we need is sub-second inference speed, which is incredibly challenging,” he added.

On the development front, Dykstra believes that while progress has been made on AI-driven coding tools, a breakthrough in “no-code” solutions is yet to be seen. A no-code solution is a software development approach that lets users build applications quickly with little or no programming.

“Numerous projects are utilizing GPT-4 for coding within large codebases, but the no-code design remains unsolved due to the complexity of contextualizing the entire codebase,” he said.


Rakhmanov, on the other hand, has his focus set on the broader landscape influencing generative AI. He believes that regulatory actions from leading governments will be a key factor to watch as they stand to define acceptable AI practices.

Moreover, he believes that we may also be on the precipice of a global race for AI dominance, especially between major tech players and countries like the United States and China.

“Computing power and chip production are among the crucial conversations that will shape AI’s future,” he noted.

Thus, as we head toward a future driven by technologies like AI, ML and NLP, it will be interesting to see how the global digital landscape continues to evolve over the coming decade.