Turbocharging Your Business with (Gen)AI

Bharti Patel
SVP, Product Engineering, Hitachi Vantara

April 17, 2024

If you were to stop someone walking down the street and ask them how long artificial intelligence, or AI, has been a hot topic, they might say it’s something that’s emerged mostly in recent years. But AI has been around for a long time, with the term first being coined as long ago as 1955.

Generative AI, however, is a different beast, and one that's largely responsible for putting AI on the tip of everyone's tongue – consumers and enterprises alike. 2023 is widely considered generative AI's 'breakout' year, and a breakout it was: the market is now expected to reach a volume of 207 billion dollars by 2030.

So, how did generative AI rise to prominence quite so quickly, and how can businesses seize the many opportunities – and avoid the potential pitfalls – it provides?

The gen AI (r)evolution

I don't think many would disagree that the recent and exponential rise of ChatGPT has generated an explosion in conversation about gen AI across the globe. By providing a simple, natural language interface, the power of AI was exposed via a popular, easy-to-use service that has excited and amazed a huge span of users – from children to their grandparents. As a result of this worldwide phenomenon, ChatGPT's user base grew to 100 million users in less than two months – a feat that took streaming giant Netflix, for example, ten years to achieve – quickly cementing ChatGPT as the fastest-growing app.

But it wasn't just consumers who took notice. Enterprises, too, were increasingly turning their attention to the power of generative AI, and it wasn't long before the likes of McKinsey were predicting that generative AI could add trillions of dollars to the global economy.

It's important to note, however, that this shift didn't happen overnight. It was made possible by the huge amount of compute we have available today, together with the powerful transformer architecture underpinning large language models, or LLMs, like those behind ChatGPT – enabling customer support; conversational dialog; translation and summarization; and content, code, and test case creation.

And it all comes back to data. Where previously, limited data meant that models often overfit, the surplus of data we have available today has enabled the rapid development of new models. Thanks to data, we're now living in an era where the democratization of gen AI means the constant unleashing of new possibilities.

Understanding the power of the paradigm shift for enterprise

Now that we understand more about the generative AI journey, how can it be harnessed by enterprises? It's a question asked frequently by many of us in the industry, and one that I work to solve every day in my role as SVP and Head of Engineering at Hitachi Vantara.

It's one thing for LLMs to be used to write a Valentine's poem or the foundations of an essay for your homework – but using gen AI in an enterprise setting, often for mission-critical applications, is something entirely different. Here, there's no scope for hallucination (incorrect or misleading results generated by AI models): businesses utilizing gen AI require 100% accuracy. Other requirements key to successful adoption include explainability, traceability, observability, and many more – and all this must be implemented in a cost-effective and socially responsible manner.

Data anywhere, gen AI everywhere

Let’s get into the specific considerations that enterprises face on their gen AI journey.

With data able to reside anywhere, the associated risks inevitably become greater, meaning most businesses require support in accessing their data in a secure manner. Many are increasingly turning to hybrid cloud, but they are also seeking an experience that enables the best of both data storage worlds: the ease of use of the cloud, and the low cost of on-premise.

Their data also needs to be accessed proactively rather than reactively. In a world of increasing cyberattacks and data protection challenges, it's not enough to wait for problems to arise – businesses must be able to constantly monitor their systems and act on potential issues before they occur.
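To make the proactive-versus-reactive distinction concrete, here is a minimal, illustrative sketch (not any particular product's logic): instead of alerting only after storage fills up, it extrapolates the recent usage trend and flags a projected breach ahead of time. The sample data, thresholds, and function name are all hypothetical.

```python
# Toy sketch of proactive monitoring: flag a resource when its recent
# growth trend predicts it will exceed capacity soon, rather than
# reacting after the limit is already hit.

def predict_breach(samples, capacity, horizon):
    """Given evenly spaced usage samples, linearly extrapolate the
    average growth rate and return True if usage is projected to
    reach `capacity` within `horizon` future sample intervals."""
    if len(samples) < 2:
        return False  # not enough history to estimate a trend
    rate = (samples[-1] - samples[0]) / (len(samples) - 1)  # per interval
    projected = samples[-1] + rate * horizon
    return projected >= capacity

# Example: disk usage in %, one sample per hour, growing ~4.5%/hour.
usage = [70, 74, 79, 83, 88]
print(predict_breach(usage, capacity=100, horizon=4))  # projected ~106% -> True
```

A real deployment would use a proper forecasting model and a metrics pipeline, but the principle is the same: act on the trend, not the incident.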

On this journey, enterprises may wonder whether to utilize one large, all-encompassing model or to implement numerous smaller models. A strong approach is to fine-tune a large model for a specific task so that it can delegate to the smaller ones, rather than being forced to guess and hallucinate. This enables more accurate, cost-effective results – but underneath it all, a high-performance data platform that can feed GPUs at speed is required to make it work.
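The delegation pattern described above can be sketched in a few lines. This is a deliberately simplified illustration, not a production design: the specialist names, the keyword-based `classify` stand-in (which in practice would be a fine-tuned routing model), and the fallback message are all assumptions for the example.

```python
# Sketch of a router that delegates queries to smaller specialist
# models instead of forcing one general model to answer everything.
# Lambdas stand in for real model calls; names are hypothetical.

SPECIALISTS = {
    "translation": lambda q: f"[translation model] {q}",
    "summarization": lambda q: f"[summarization model] {q}",
    "code": lambda q: f"[code model] {q}",
}

def classify(query: str) -> str:
    """Stand-in for the fine-tuned large model that decides which
    specialist should handle the query."""
    q = query.lower()
    if "translate" in q:
        return "translation"
    if "summarize" in q:
        return "summarization"
    if "code" in q or "function" in q:
        return "code"
    return "unknown"

def route(query: str) -> str:
    handler = SPECIALISTS.get(classify(query))
    if handler is None:
        # Declining to answer is preferable to hallucinating one.
        return "No suitable specialist; escalate to a human reviewer."
    return handler(query)
```

The key property for accuracy is the explicit fallback: when no specialist fits, the system escalates rather than fabricating a response.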

There are several options here, from out-of-the-box solutions on which to build your own gen AI applications to platforms for developing gen AI co-pilots or companions. Whichever approach you choose, ensuring the architecture enables performance and resiliency at scale should always be a priority.

Enabling data access and intelligence for gen AI

While AI is technically nothing new, generative AI is a different ballgame. It's a much newer field, and one that we're in the early stages of truly understanding. As such, there are many unknowns, and the use of gen AI for mission-critical applications carries a high element of risk; on the other hand, businesses that do not begin the gen AI journey face being left behind.

It's a fine line to walk. The first step for enterprises should be to recognize that harnessing AI successfully is a process that takes time, and a journey where collaboration is key to achieving the right result for your business.