
Thursday, 7 November 2024

What Is Generative AI, How Does It Work, and What Are Its Uses?

Generative AI is a relatively new form of AI that, unlike its predecessors, can create new content from its training data. Its extraordinary ability to produce human-like writing, images, audio, and video has captured the world’s imagination since the first consumer-grade generative AI chatbot was launched in the fall of 2022. A June 2023 report from McKinsey & Company estimated that generative AI has the potential to add between $6.1 and $7.9 trillion per year to the global economy by increasing labor productivity. To put that into context, the same research puts the annual economic potential of productivity gains from all AI technologies at between $17.1 and $25.6 trillion. So while generative AI has the spotlight in 2023, it’s still only a portion of AI’s full potential. 


But every action has an equal and opposite reaction. So along with its remarkable productivity prospects, generative AI brings new potential business risks, such as inaccuracy, privacy violations, and intellectual property exposure, as well as the ability to cause large-scale economic and social disruption. For example, the productivity benefits of generative AI are unlikely to materialize without substantial worker retraining efforts, and even then, they will undoubtedly displace many people from their current jobs. Consequently, government policymakers around the world, and even some tech industry executives, are advocating for the rapid adoption of regulations regarding AI.

This article explores in depth the promise and peril of generative AI: how it works; its most immediate applications, use cases, and examples; its limitations; its potential business benefits and risks; best practices for its use; and an outlook for its future.

What is generative AI?

Generative artificial intelligence (GAI) is the name given to a subset of machine learning technologies that have recently developed the ability to rapidly create content in response to text prompts, which can be short and simple or very long and complex. Different generative AI tools can produce new audio, image, and video content, but it is text-oriented conversational AI that has sparked the imagination. In effect, people can converse with text-trained generative AI models in the same way they do with humans.

Generative AI took the world by storm in the months following the launch of ChatGPT, a chatbot based on OpenAI’s GPT-3.5 neural network model, on November 30, 2022. GPT stands for “generative pretrained transformer,” words that primarily describe the underlying architecture of the model’s neural network.

There are many earlier instances of conversational chatbots, starting with ELIZA from the Massachusetts Institute of Technology (MIT) in the mid-1960s. But most earlier chatbots, including ELIZA, were entirely or mostly rule-based, so they lacked contextual understanding. Their responses were limited to a set of predefined rules and templates. In contrast, the generative AI models that are emerging now have no such predefined rules or templates. Metaphorically speaking, they are blank, primitive brains (neural networks) that are exposed to the world through training with real-world information. They then independently develop intelligence—a representative model of how that world works—which they use to generate novel content in response to prompts. Even AI experts don’t know exactly how they do this, since the algorithms develop and adjust themselves as the system is trained.

Companies large and small should be excited about the potential of generative AI to bring the benefits of technological automation to knowledge work, which has so far largely resisted automation. Generative AI tools change the calculus of knowledge work automation; their ability to produce human-like writing, images, audio, or video in response to plain English prompts means they can collaborate with human partners to generate content that represents practical work.

"Over the next few years, many companies are going to be training their own large, specialized language models," said Larry Ellison, Oracle's president and chief technology officer, during the company's earnings call in June 2023.

Generative AI vs. AI

Artificial intelligence is a vast area of computer science, of which generative AI is only a small part, at least at present. Naturally, generative AI shares many characteristics with traditional AI. But there are also some important differences.

  • Common attributes: Both rely on large amounts of data for training and decision making (although training data for generative AI can be orders of magnitude larger). Both learn patterns from the data and use that “knowledge” to make predictions and adapt their own behavior. Both can optionally be improved over time by adjusting their parameters based on feedback or new information.
  • Differences: Traditional AI systems are typically designed to perform a specific task better or at a lower cost than a human, such as detecting credit card fraud, providing driving directions, or, likely soon, driving the car. Generative AI is broader; it creates new, original content that resembles, but is not found in, its training data. Additionally, traditional AI systems, such as machine learning systems, are primarily trained on data specific to their intended function, whereas generative AI models are trained on large, diverse data sets (and then sometimes fine-tuned on much smaller volumes of data related to a specific function). Finally, traditional AI is almost always trained on labeled/categorized data using supervised learning techniques, whereas generative AI must always be trained, at least initially, using unsupervised learning (where the data is unlabeled and the AI software is given no explicit guidance).

Another difference worth noting is that training foundational models for generative AI is “obscenely expensive,” to quote one AI researcher. Imagine, $100 million just for the hardware needed to get started, plus the equivalent costs of cloud services, since that’s where most AI development is done. Then there’s the cost of the huge volumes of data required.

Key findings

  • Generative AI became a viral sensation in November 2022 and is expected to soon add trillions of dollars to the global economy every year.
  • Generative AI is a form of machine learning based on neural networks trained on vast data sets; it can create novel text, image, video, or audio content in response to natural language prompts from users.
  • Market researchers predict that the technology will drive economic growth by dramatically accelerating productivity gains for knowledge workers, whose jobs have so far resisted automation.
  • Generative AI carries risks and limitations that companies must mitigate, such as “hallucinating” incorrect or false information and inadvertent copyright infringement.
  • It is also expected to cause significant changes in the nature of work, including possible job losses and role restructuring.

Generative Artificial Intelligence in detail

For companies large and small, the seemingly magical promise of generative AI is that it can bring the benefits of technological automation to knowledge work. Or, as a McKinsey report put it, “activities involving decision-making and collaboration, which previously had the least potential for automation.”

Historically, technology has been most effective at automating routine or repetitive tasks, for which decisions were already known or could be determined with a high level of confidence, based on specific and very clear rules. Think of manufacturing, with its precise repetition on the assembly line, or accounting, with its regulated principles set by industry associations. But generative AI has the potential to do much more sophisticated cognitive work. To suggest an admittedly extreme example, generative AI could help shape an organization’s strategies, responding to prompts asking for ideas and alternative scenarios from business managers in the midst of industry change.

In its report, McKinsey assessed 63 use cases across 16 business functions, concluding that 75% of the $1 trillion in potential value that could be achieved with generative AI will come from a subset of use cases in just four of those functions: customer operations, marketing and sales, software engineering, and research and development. Revenue growth prospects across industries were more evenly spread, though there were standouts: High-tech topped the list for potential boost, in terms of percentage of industry revenue, followed by banking, pharmaceuticals and medical, education, telecommunications, and healthcare.

Separately, a Gartner analysis agrees with McKinsey predictions — for example, that more than 30% of new drugs and materials will be discovered using generative AI techniques by 2025, up from zero today, and that 30% of large organizations’ outbound marketing messages will be synthetically generated by 2025, up from 2% in 2022. And in an online survey, Gartner found that customer experience and retention was the top response (38%) of 2,500 executives asked where their organizations were investing in generative AI.

What makes all of this possible so quickly is that, unlike traditional AI, which has been quietly automating and adding value to business processes for decades, generative AI burst into the world's consciousness thanks to ChatGPT's human-like conversational skill. That skill has also drawn attention to generative AI tools focused on other modalities; it seems everyone is experimenting with writing text or creating music, images, and videos using one or more of the models that specialize in each area. With so many organizations already experimenting with generative AI, its impact on business and society is likely to be colossal, and it will happen astonishingly quickly.

The obvious downside is that knowledge work will change. Individual roles will change, sometimes significantly, and workers will need to learn new skills. Some jobs will be lost. Historically, however, major technological changes like generative AI have added more (and higher-value) jobs to the economy than they eliminate. That is of little comfort, of course, to those whose jobs are eliminated.

How does it work?

There are two answers to the question of how generative AI models work. In one sense, we know exactly how they work, because humans designed their various neural network implementations to do precisely what they do, iterating those designs over decades to make them ever better. AI developers know exactly how the neurons are wired, and they designed each model's training process. In practice, however, no one knows exactly how generative AI models produce their results, and that's an embarrassing truth.

“We don’t know how they perform the actual creative task because what’s going on inside the layers of the neural network is too complex for us to figure out, at least today,” said Dean Thompson, former CTO of multiple AI startups that have been acquired over the years by companies like LinkedIn and Yelp, where he still works as a senior software engineer on large language models (LLMs). The ability of generative AI to produce new and original content seems to be an emergent property of what is known — namely, its structure and training. So while there’s a lot to explain in what we know, what a model like GPT-3.5 is actually doing internally — what it’s thinking, so to speak — has yet to be discovered. Some AI researchers are confident that this will be known in the next 5 to 10 years; others aren’t sure it will ever be fully understood.

Here's an overview of what we know about how generative AI works:

  • Let’s start with the brain. A good place to begin understanding generative AI models is with the human brain, according to Jeff Hawkins in his 2004 book, “On Intelligence.” Hawkins, a computer scientist and neuroscientist, presented his work at a 2005 session at PC Forum, which was an annual conference of leading technology executives led by tech investor Esther Dyson. Hawkins hypothesized that at the neural level, the brain works by continually predicting what is going to happen and then learning from the differences between its predictions and subsequent reality. To improve its predictive ability, the brain builds an internal representation of the world. In his theory, human intelligence emerges from that process. Whether influenced by Hawkins or not, generative AI works exactly this way. And, remarkably, it acts as if it were intelligent.

  • Building an artificial neural network. All generative AI models start with an artificial neural network coded in software. Thompson says a good visual metaphor for a neural network is to imagine the familiar spreadsheet, but in three dimensions, because the artificial neurons are stacked in layers, similar to how real neurons are stacked in the brain. AI researchers even call each neuron a “cell,” Thompson notes, and each cell contains a formula that relates it to other cells in the network, mimicking the way connections between neurons in the brain have different strengths.

    Each layer can have tens, hundreds, or thousands of artificial neurons, but the number of neurons is not what AI researchers focus on. Instead, they measure models by the number of connections between neurons. The strengths of these connections vary based on the coefficients in their cells’ equations, which are more generally known as “weights” or “parameters.” These coefficients defining the connections are what you’re referring to when you read, for example, that the GPT-3 model has 175 billion parameters. The latest version, GPT-4, is said to have trillions of parameters, though this is unconfirmed. There are a handful of neural network architectures with different characteristics that lend themselves to producing content in a particular modality—the transformer architecture seems to be best for large language models, for example.

  • Educating the newborn neural network model. Large language models are given huge volumes of text to process and tasked with making simple predictions, such as the next word in a sequence or the correct order of a set of sentences. In practice, however, neural network models work on units called tokens, not words.

    “A common word might have its own token, unusual words would almost certainly be made up of multiple tokens, and some tokens might simply be a space followed by ‘th’ because that three-character sequence is so common,” Thompson said. To make each prediction, the model inputs a token into the bottom layer of a particular stack of artificial neurons; that layer processes the token and passes its output to the next layer, which processes it and passes along its own output, and so on until the final output emerges at the top of the stack. Stack sizes can vary significantly, but they are typically on the order of tens of layers, not thousands or millions.

    In the early stages of training, the model's predictions aren't very good. But every time the model predicts a token, it checks to see if it's correct with respect to the training data. Whether it's correct or incorrect, a "backpropagation" algorithm adjusts the parameters — that is, the coefficients of the formulas — in each cell of the stack that made that prediction. The goal of the adjustments is to make the correct prediction more likely.

    "It does this even for correct answers, because that correct prediction may have been, say, 30 percent accurate, but that 30 percent was the highest among all the other possible answers," Thompson said. "So backpropagation is looking to turn that 30 percent into 30.001 percent or something like that."

    After the model has repeated this process for trillions of text tokens, it becomes very good at predicting the next token or word. After initial training, generative AI models can be fine-tuned using a supervised learning technique, such as reinforcement learning from human feedback (RLHF). In RLHF, the model’s output is presented to human reviewers who give a positive or negative binary evaluation — “thumbs up” or “thumbs down” — which is fed back to the model. RLHF was used to fine-tune OpenAI’s GPT-3.5 model and help create the ChatGPT chatbot that went viral.

  • But how did the model answer my question? It’s a mystery. Here’s how Thompson explains the current state of understanding: “There’s a big ‘we just don’t know’ in the middle of my explanation. What we know is that it takes your entire question as a sequence of tokens and in the first layer it processes them all simultaneously. And we know that it then processes the outputs of that first layer in the next layer, and so on up the stack. And then we know that it uses that top layer to predict — that is, produce a first token — and that first token is represented as data throughout that system to produce the next token, and so on.”

    The next logical question is: what did the model think about, and how, in all that processing? What did all those layers do? The most honest answer is that we don't know. We don't... know. You can study it. You can observe it. But it's too complex for us to analyze. It's like fMRI (functional magnetic resonance imaging) of the human brain: the most rudimentary sketch of what the model actually did. We don't know.

    In a controversial conclusion, a group of more than a dozen researchers who were given early access to GPT-4 in the fall of 2022 determined that the intelligence with which the model responds to complex challenges, and the broad range of insight it exhibits, indicate that GPT-4 has achieved a form of general intelligence. In other words, it has built an internal model of how the world works, much as a human brain might, and it uses that model to reason through the questions posed to it. One of the researchers told the “This American Life” podcast that he had a “holy crap” moment when he asked GPT-4 to give him “a recipe for chocolate chip cookies, but written in the style of a very depressed person,” and the model replied: “Ingredients: 1 cup softened butter, if you can even find the energy to soften it. 1 teaspoon vanilla extract, the artificial, fake flavor of happiness. 1 cup semi-sweet chocolate chips, little joys that will eventually melt away completely.”
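The layer-and-parameter structure described above can be sketched in a few lines of Python with NumPy. Everything here is an illustrative toy (a few hundred parameters, versus GPT-3's 175 billion), not any real model's design:

```python
import numpy as np

# Toy "stack" of dense layers, in the spirit of the 3D-spreadsheet metaphor:
# each layer is a matrix of coefficients (the "weights" or "parameters")
# plus a bias per neuron. Sizes are made up for illustration.
rng = np.random.default_rng(0)

layer_sizes = [8, 16, 16, 4]  # neurons per layer: input -> two hidden -> output
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

# "Parameters" = every weight plus every bias; these are what training adjusts.
n_params = sum(w.size for w in weights) + sum(b.size for b in biases)  # 484 here

def forward(x):
    """Pass an input up the stack, one layer at a time."""
    for w, b in zip(weights, biases):
        x = np.tanh(x @ w + b)  # each cell: weighted sum of inputs + nonlinearity
    return x

out = forward(np.ones(8))  # a 4-dimensional output emerges at the top of the stack
```

Counting connections rather than neurons, as researchers do, is why the parameter total (484) dwarfs the neuron count (44) even in this tiny stack.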
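The predict-compare-adjust training loop described above can also be illustrated with a deliberately crude stand-in: a table of scores nudged toward each observed next token. Only the loop's shape matches the real thing; actual models adjust billions of network coefficients via backpropagation, not a lookup table:

```python
from collections import defaultdict

# Toy next-token "trainer": one score per (previous token, next token) pair.
# The text and learning rate are made up for illustration.
text = "the cat sat on the mat the cat sat on the hat".split()

scores = defaultdict(lambda: defaultdict(float))
LR = 1.0  # size of each adjustment

for prev, nxt in zip(text, text[1:]):
    # A real loop would predict here, compare with `nxt`, then backpropagate.
    # The adjustment's goal is the same: make the true continuation more likely.
    scores[prev][nxt] += LR

# After "training", the highest-scoring continuation wins.
prediction_after_the = max(scores["the"], key=scores["the"].get)  # "cat"
```

Just as Thompson describes, the adjustment happens on every example, gradually shifting probability toward correct continuations.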

Why is it important?

A useful way to understand the importance of generative AI is to think of it as a calculator for creative, open-ended content. Much like a calculator automates routine, mundane math, freeing up a person to focus on higher-level tasks, generative AI has the potential to automate the more routine, mundane tasks that make up much of knowledge work, allowing people to focus on the higher-level parts of the work.

Consider the challenges marketers face in gaining actionable insights from the unstructured, inconsistent, and disconnected data they often encounter. Traditionally, they would need to consolidate that data as a first step, requiring a considerable amount of custom software engineering to give a common structure to disparate data sources such as social media, news, and customer feedback.

“But with LLMs, you can just plug information from different sources right into the application and then ask for key insights, or what feedback to prioritize, or ask for sentiment analysis, and it will just work,” said Basim Baig, a senior engineering manager specializing in AI and security at Duolingo. “The power of LLM here is that it allows you to skip that huge, expensive engineering step.”

Thinking further, Thompson suggests that product marketers could use LLMs to tag free text for analysis. For example, imagine you have a large database of social media mentions of your product. You could write software that applies an LLM and other technologies to:

  • Extract the main themes from each social media post.
  • Group idiosyncratic themes that emerge from individual posts into recurring themes.
  • Identify which publications support each recurring theme.

You could then apply the results to:

  • Study the most frequently recurring themes by clicking on examples.
  • Track the rise and fall of recurring themes.
  • Ask an LLM to delve deeper into a recurring theme for frequent mentions of product features.
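The steps above can be sketched as a short pipeline. Here `extract_themes` is a hypothetical keyword matcher standing in for an LLM call (a real system would prompt a model API instead; the posts and keyword map are made up):

```python
from collections import Counter

def extract_themes(post):
    """Stand-in for an LLM theme-extraction prompt (hypothetical logic)."""
    theme_keywords = {"battery": "battery life", "price": "pricing",
                      "crash": "stability", "slow": "performance"}
    return [theme for kw, theme in theme_keywords.items() if kw in post.lower()]

posts = [
    "Love the app but the battery drain is terrible",
    "Crashes every time I open settings",
    "Battery life got worse after the update, and it feels slow",
]

# Extract themes per post, group them into recurring themes, and keep
# track of which posts support each theme.
supporting = {}  # recurring theme -> supporting posts
for post in posts:
    for theme in extract_themes(post):
        supporting.setdefault(theme, []).append(post)

recurring = Counter({theme: len(ps) for theme, ps in supporting.items()})
top_theme, top_count = recurring.most_common(1)[0]  # ("battery life", 2)
```

Swapping the keyword matcher for an LLM prompt is exactly the "skip the huge engineering step" point Baig makes: the grouping and tracking code stays trivial.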

Generative AI models

Generative AI represents a broad category of applications based on a growing range of neural network variations. While all generative AI fits the general description in the “How does it work?” section above, implementation techniques vary to support different media, such as images versus text, and to incorporate research and industry advances as they emerge.

Neural network models use repetitive patterns of artificial neurons and their interconnections. A neural network design, for any application including generative AI, often repeats the same pattern of neurons hundreds or thousands of times, usually reusing the same parameters. This is an essential part of what is called “neural network architecture.” The discovery of new architectures has been an important part of AI innovation since the 1980s, often driven by the goal of supporting a new medium. But then, once a new architecture has been invented, further progress is often made by using it in unexpected ways. Further innovation comes from combining elements of different architectures.

Two of the oldest and most common architectures are:

  • Recurrent neural networks (RNNs) emerged in the mid-1980s and are still in use. RNNs demonstrated how AI could learn and be used to automate tasks that rely on sequential data—that is, information whose sequence contains meaning, such as language, stock market behavior, and web clickstreams. RNNs are the basis for many audio AI models, such as music-generating applications—think of the sequential nature of music and time-based dependencies. But they are also effective in natural language processing (NLP). RNNs are also used in traditional AI functions, such as speech recognition, handwriting analysis, financial and weather forecasting, and to predict variations in energy demand, among many other applications.
  • Convolutional neural networks (CNNs) emerged about 10 years later. They focus on grid-shaped data and are therefore excellent for spatial data representations and can generate images. Popular text-to-image generative AI applications, such as Midjourney and DALL-E, use CNNs to generate the final image.
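To see why RNNs suit sequential data, here is a minimal sketch of a vanilla RNN cell in Python with NumPy. The hidden state carries context from earlier steps forward, and processing is inherently one step at a time (sizes and weights are illustrative):

```python
import numpy as np

# One step of a vanilla RNN cell. The hidden state h carries context from
# earlier steps, which is why RNNs suit ordered data such as text,
# clickstreams, or market series.
rng = np.random.default_rng(0)
d_in, d_h = 4, 6
W_x = rng.standard_normal((d_in, d_h)) * 0.1  # input -> hidden weights
W_h = rng.standard_normal((d_h, d_h)) * 0.1   # hidden -> hidden (the "recurrence")

def rnn_step(x_t, h_prev):
    return np.tanh(x_t @ W_x + h_prev @ W_h)

# Process a 3-step sequence one element at a time: inherently sequential,
# since each step needs the previous step's hidden state.
h = np.zeros(d_h)
for x_t in rng.standard_normal((3, d_in)):
    h = rnn_step(x_t, h)
```

That step-by-step dependence is also the RNN's bottleneck, and it is precisely what the transformer architecture, described next, removes.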

Although RNNs are still frequently used, successive efforts to improve RNNs led to a breakthrough:

  • Transformer models have evolved into a much more flexible and powerful way of representing sequences than RNNs. They have several features that allow them to process sequential data, such as text, in huge quantities and in parallel without losing their understanding of the sequences. That parallel processing of sequential data is one of the key features that allows ChatGPT to respond so quickly and well to natural language prompts.
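That parallel handling of a sequence can be seen in a stripped-down sketch of self-attention, the transformer's core operation. This toy version omits the learned query/key/value projections, multiple heads, and positional encoding of real models:

```python
import numpy as np

# Stripped-down self-attention: every token attends to every other token in
# one batched matrix computation, rather than step by step as in an RNN.
rng = np.random.default_rng(0)
seq_len, d = 5, 8                     # 5 tokens, 8-dimensional embeddings
x = rng.standard_normal((seq_len, d))

scores = x @ x.T / np.sqrt(d)         # all-pairs token similarity, computed at once
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)    # softmax: each row sums to 1
out = attn @ x                        # each token becomes a weighted mix of all tokens
```

Because the all-pairs comparison is a single matrix multiplication, hardware can process an entire sequence at once, which is a large part of why transformers train and respond so quickly.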

Research, private industry, and open source efforts have created impactful models that innovate at higher levels of neural network architecture and application. For example, there have been crucial innovations in the training process, in how training feedback is incorporated to improve the model, and in how multiple models can be combined in generative AI applications. Here is a summary of some of the most important innovations in generative AI models:

  • Variational autoencoders (VAEs) use innovations in neural network architecture and training processes and are often incorporated into image-generating applications. They consist of encoding and decoding networks, each of which may use a different underlying architecture, such as an RNN, CNN, or transformer. The encoder learns the important features and characteristics of an image, compresses that information, and stores it as a representation in memory. The decoder then uses that compressed information to attempt to recreate the original. Ultimately, VAEs learn to generate new images similar to their training data.
  • Generative adversarial networks (GANs) are used in a variety of modalities, but seem to have a special affinity for video and other image-related applications. What sets GANs apart from other models is that they consist of two neural networks competing with each other while training. In the case of images, for example, the “generator” creates an image and the “discriminator” decides whether the image is real or generated. The generator is constantly trying to trick the discriminator, which is always trying to catch the generator in the act. In most cases, the two competing neural networks are based on CNN architectures, but they can also be variants of RNNs or transformers.
  • Diffusion models incorporate multiple neural networks into a general framework, sometimes integrating different architectures such as CNNs, transformers, and VAEs. Diffusion models learn by compressing data, progressively adding noise, and then denoising it in an attempt to regenerate the original. The popular tool Stable Diffusion uses a VAE encoder and decoder for the first and last steps, respectively, and two CNN variants in the noising/denoising steps.
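The "add noise, then learn to reverse it" idea behind diffusion models can be illustrated with the forward (noising) half of the process. The noise schedule below is an illustrative toy, not a tuned production schedule; a real diffusion model is trained to predict and remove the noise added at each step:

```python
import numpy as np

# Forward ("noising") half of a diffusion process: repeatedly blend the data
# with Gaussian noise. A reverse (denoising) network is trained to undo these
# steps one at a time, and generation runs that reverse process from pure noise.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(1000)        # stand-in for clean data (e.g., pixel values)

betas = np.linspace(1e-4, 0.2, 50)    # amount of noise added per step (toy values)
x = x0.copy()
for beta in betas:
    x = np.sqrt(1 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)

# After enough steps almost none of the original signal survives; what remains
# is noise the reverse model must learn to strip away, step by step.
corr = abs(np.corrcoef(x0, x)[0, 1])  # near zero: original and result barely correlate
```

The blending weights keep the overall variance roughly constant while the correlation with the original data decays toward zero, which is what makes the final state indistinguishable from pure noise.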

What are the applications of generative AI?

While the world has only just begun to explore the potential of generative AI applications, it’s easy to see how businesses can benefit from applying it to their operations. Consider how generative AI could change key areas of customer interactions, sales and marketing, software engineering, and research and development.

In customer service, earlier AI technology automated processes and introduced self-service, but it also created new frustrations for customers. Generative AI promises to bring benefits to both customers and service reps, with chatbots that can adapt to different languages and regions, creating a more personalized and accessible customer experience. When human intervention is needed to resolve a customer issue, service reps can collaborate in real-time with generative AI tools to find effective strategies, improving the speed and accuracy of interactions. The speed with which generative AI can access a company’s full knowledge base and synthesize new solutions to customer complaints gives service staff a greater ability to effectively resolve specific customer issues, rather than relying on outdated phone systems and call transfers until an answer is found—or until the customer loses patience.

In marketing, generative AI can automate the integration and analysis of data from disparate sources, which should dramatically speed up the time to insights and drive better-informed decision-making and faster development of go-to-market strategies. Marketers can use this information alongside other AI-generated insights to create new, more targeted advertising campaigns. This reduces the time staff must spend collecting demographic and purchasing behavior data and provides more time to analyze results and generate new ideas.

Tom Stein, president and chief brand officer of B2B marketing agency Stein IAS, notes that every marketing agency, including his own, is exploring these opportunities at a rapid pace. Stein also notes that there are simpler, quicker wins for an agency’s internal processes.

“If we get an RFI [request for information], typically 70 to 80% of the RFI is going to ask for the same information as every other RFI, maybe with some contextual differences specific to that company’s situation,” says Stein, who also served as jury chair for the 2023 Cannes Lions Creative B2B Awards. “It’s not that complicated to put ourselves in a position to have any number of AI tools do that work for us… So if we get that 80% of our time back and can spend it adding value to the RFI and just making it sing, that’s a win all the way around. And there are a number of processes like that.”

In software development, collaborating with generative AI can simplify and speed up processes at every step, from planning to maintenance. During the initial creation phase, generative AI tools can analyze and organize large amounts of data and suggest multiple program configurations. Once coding begins, AI can test and troubleshoot, identify bugs, run diagnostics, and suggest solutions, both before and after release. Thompson notes that because many enterprise software projects incorporate multiple programming languages and disciplines, he and other software engineers have used AI to train themselves in unfamiliar areas much faster than they could previously. He has also used generative AI tools to explain unfamiliar code and identify specific problems.

In research and development (R&D), generative AI can increase the speed and depth of market research during the early phases of product design. AI programs, especially those with imaging capabilities, can then create detailed designs of potential products before simulating and testing them, giving workers the tools they need to make quick and effective adjustments throughout the R&D cycle.

Oracle founder Larry Ellison noted on the June earnings call that “specialized LLMs will accelerate the discovery of new, life-saving drugs.” Drug discovery is an R&D application that exploits generative models’ tendency to “hallucinate” incorrect or unverifiable information, but in a positive way: identifying new molecules and protein sequences in search of novel health treatments. Separately, Oracle subsidiary Cerner Enviza has partnered with the U.S. Food and Drug Administration (FDA) and John Snow Labs to apply AI tools to the challenge of “understanding the effects of drugs on large populations.” Oracle’s AI strategy is to make AI pervasive in its cloud applications and cloud infrastructure.

Use cases

Generative AI has the potential to speed up or completely automate a wide variety of tasks. Businesses should plan deliberate and specific ways to maximize the benefits it can bring to their operations. Here are some specific use cases:

  • Bridging knowledge gaps: With their simple, chat-based user interfaces, generative AI tools can answer workers’ general or specific questions to point them in the right direction when they get stuck on anything from the simplest of queries to complex operations. For example, salespeople can request information about a specific account; programmers can learn new programming languages.
  • Check for errors: Generative AI tools can look for mistakes in any text, from casual emails to professional writing samples. And they can do more than just fix errors — they can explain the what and why to help users learn and improve their work.
  • Improve communication: Generative AI tools can translate text into different languages, adjust tone, create unique messages based on different data sets, and more. Marketing teams can use generative AI tools to create more relevant advertising campaigns, while internal staff can use it to search through past communications and quickly find relevant information and answers to questions without interrupting other employees. Thompson believes this ability to synthesize institutional knowledge into any question or idea a worker might have will fundamentally change the way people communicate within large organizations, empowering knowledge discovery.
  • Ease administrative burden: Businesses with heavy administrative burdens, such as medical coding and billing, can use generative AI to automate complex tasks, such as properly filing documents and analyzing doctor notes. This frees up staff to focus on more practical work, such as patient care or customer service.
  • Analyze medical images for anomalies: Healthcare providers can use generative AI to examine medical records and images and flag salient issues, as well as provide medication recommendations contextualized with the patient’s history.
  • Troubleshoot code: Software engineers can use generative AI models to troubleshoot and adjust their code faster and more reliably than combing through code line by line. They can then ask the tool for more detailed explanations to inform future coding and improve their processes.
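
Chat-style generative AI APIs generally take a list of role-tagged messages. As a rough illustration of the troubleshooting use case above, the helper below (a hypothetical name, not any vendor’s actual SDK) assembles such a message list following the common chat-completion convention; the prompt wording is purely illustrative, and the network call to a real model is left out.

```python
def build_troubleshoot_request(code_snippet: str, error_message: str) -> list[dict]:
    """Assemble the role-tagged message list most chat-based
    generative AI APIs expect. Hypothetical helper for illustration."""
    return [
        {"role": "system",
         "content": "You are a senior engineer. Find the bug, explain it, "
                    "and suggest a fix."},
        {"role": "user",
         "content": f"This code fails with:\n{error_message}\n\nCode:\n{code_snippet}"},
    ]

# Example: ask the model why an average calculation crashes on empty input.
messages = build_troubleshoot_request(
    "total = sum(prices) / len(prices)",
    "ZeroDivisionError: division by zero",
)
print(messages[1]["content"])
```

The message list would then be sent to whichever chat-completion endpoint the business uses; the model’s reply typically includes both the fix and the explanation workers can learn from.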

What are the benefits of Generative Artificial Intelligence?

The benefits that generative AI can bring to a business stem primarily from three general attributes: knowledge synthesis, human-AI collaboration, and speed. While many of the benefits outlined below are similar to those promised in the past by AI models and automation tools, the presence of one or more of these three attributes can help businesses realize the advantages more quickly, easily, and effectively.

With generative AI, organizations can build custom models trained on their own institutional knowledge and intellectual property (IP), after which knowledge workers can ask the software to collaborate on a task in the same language they might use with a colleague. A specialized generative AI model can respond by synthesizing information from across the corporate knowledge base at astonishing speed. Not only does this approach reduce or eliminate the need for complex, often expensive custom software engineering to build task-specific programs, it is also likely to uncover insights and connections that previous approaches could not.
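
One simplified way to picture how a model grounded in institutional knowledge finds the right material is the retrieval step: score each document in the knowledge base against the worker’s question and surface the best match. The sketch below uses plain word-count cosine similarity as a stand-in for the learned embeddings a real system would use; the documents and query are invented examples.

```python
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using raw word counts
    (a crude stand-in for learned embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, knowledge_base: list[str]) -> str:
    """Return the knowledge-base entry most relevant to the query."""
    return max(knowledge_base, key=lambda doc: similarity(query, doc))

# Invented snippets of institutional knowledge.
kb = [
    "Expense reports must be filed within 30 days of travel.",
    "The Atlas account renewal is scheduled for Q3.",
    "New hires complete security training in the first week.",
]
print(retrieve("when does the Atlas account renew", kb))
```

In a production system, the retrieved passages would then be handed to the generative model as context, so its answer is grounded in the company’s own documents rather than only its general training data.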

  • Increase productivity: Knowledge workers can use generative AI to reduce the time they spend on routine daily tasks, such as educating themselves on a new discipline suddenly needed for an upcoming project, organizing or categorizing data, searching the internet for applicable research, or composing emails. By leveraging generative AI, fewer employees can perform tasks that previously required large teams or hours of work in a fraction of the time. For example, a team of programmers might spend hours reviewing faulty code to fix problems, but a generative AI tool could find the errors in a matter of seconds and display them along with suggested solutions. Because some generative AI models possess skills that are about average or better across a broad spectrum of knowledge work competencies, collaborating with a generative AI system can dramatically increase the productivity of its human partner. For example, a junior product manager could double as a mid-level project manager with an AI assistant at their side. All of these capabilities would significantly accelerate a knowledge worker’s ability to complete a project.

  • Reduce costs: Because of their speed, generative AI tools reduce the cost of completing processes, and if it takes half the time to perform a task, the task costs half as much as it would otherwise. Additionally, generative AI can minimize errors, eliminate downtime, and identify redundancies and other costly inefficiencies. There is a trade-off, however: Because of generative AI’s tendency to “hallucinate” — that is, to confidently produce inaccurate output — human oversight and quality control are still necessary. But collaborations between humans and AI are expected to do much more work in less time than humans alone, and better and more accurately than AI tools alone, thereby reducing costs. While testing new products, for example, generative AI can help create more advanced and detailed simulations than older tools could. This ultimately reduces the time and cost of new product testing.

  • Improve customer satisfaction: Customers can get a superior, more personalized experience through generative AI-powered self-service and generative AI tools that “whisper” to customer service representatives, providing them with real-time knowledge. While the AI-powered customer service chatbots we find today may seem limited at times, the quality of today’s ChatGPT conversations makes it easy to imagine a much better customer experience powered by a company’s specially trained generative AI model.

  • Better-informed decision-making: Specially trained, enterprise-specific generative AI models can provide deep insights through scenario modeling, risk assessment, and other sophisticated predictive analytics approaches. Decision-makers can leverage these tools to gain a deeper understanding of their industry and their business’s position within it through personalized recommendations and actionable strategies, based on richer data and faster analysis than human analysts or older technology could generate on their own.

    For example, decision-makers can better plan inventory allocation ahead of a busy season by making the most accurate demand forecasts possible, thanks to a combination of internal data collected by their enterprise resource planning (ERP) system and comprehensive external market research, which is then analyzed by a specialized generative AI model. In this case, better allocation decision-making minimizes overbuying and stockouts while maximizing potential sales.
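
In the crudest possible form, the forecast described above combines an internal baseline from ERP sales history with an external market signal. The sketch below shows only the shape of that combination; a generative AI model would synthesize far richer data, and every figure here is invented.

```python
def forecast_demand(monthly_sales: list[float], market_growth: float) -> float:
    """Naive demand forecast: average recent internal unit sales (ERP data),
    then scale by an externally sourced market growth factor."""
    baseline = sum(monthly_sales) / len(monthly_sales)
    return baseline * (1 + market_growth)

# Last quarter's unit sales from the ERP system, plus a 10% projected
# market expansion from external research (illustrative numbers only).
print(forecast_demand([1200, 1350, 1500], 0.10))  # projected units, about 1485
```

The point of the generative AI version is that the "market_growth" input would itself be synthesized from unstructured sources (analyst reports, news, past communications) rather than hand-entered as a single number.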

  • Launch products faster: Generative AI can rapidly produce product prototypes and first drafts, help refine work in progress, and test/troubleshoot existing projects to find improvements much faster than previously possible.

  • Control quality: A specialized generative AI model for the enterprise will likely expose gaps and inconsistencies in the user manuals, videos, and other content a company presents to the public.
