AWS announced the AWS Generative AI Innovation Center, a new program to help customers successfully build and deploy generative artificial intelligence (AI) solutions.
AWS is investing $100 million in the program, which will connect AWS AI and machine learning (ML) experts with customers around the globe to help them envision, design, and launch new generative AI products, services, and processes.
The AWS Generative AI Innovation Center team of strategists, data scientists, engineers, and solutions architects will work with customers to build bespoke solutions that harness the power of generative AI. For example, healthcare and life sciences companies can pursue ways to accelerate drug research and discovery. Manufacturers can build solutions to reinvent industrial design and processes. And financial services companies can develop ways to provide customers with more personalized information and advice.
“The Generative AI Innovation Center is part of our goal to help every organization leverage AI by providing flexible and cost-effective generative AI services for the enterprise, alongside our team of generative AI experts to take advantage of all this new technology has to offer. Together with our global community of partners, we’re working with business leaders across every industry to help them maximize the impact of generative AI in their organizations, creating value for their customers, employees, and bottom line,” said Matt Garman, senior vice president of Sales, Marketing, and Global Services at AWS.
Through no-cost workshops, engagements, and training, AWS will help customers imagine and scope use cases based on best practices and industry expertise. Customers will work closely with generative AI experts from AWS and the AWS Partner Network to select the right models, define paths to navigate technical or business challenges, develop proofs of concept, and make plans for launching solutions at scale.
Engagements will deliver strategy, tools, and assistance to help customers use AWS generative AI services through APIs. These services include Amazon CodeWhisperer and Amazon Bedrock, a fully managed service that makes foundation models (FMs) from AI21 Labs, Anthropic, and Stability AI, along with Amazon’s own family of FMs, Amazon Titan, accessible via an API.
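As a rough illustration of the API-based access described above, the sketch below shows how a text model on Amazon Bedrock might be invoked with the AWS SDK for Python (boto3). The model identifier and the request/response fields shown are assumptions for illustration only; each provider defines its own payload format, and availability varies by region.

```python
import json

import boto3

# Minimal sketch: invoke a text model through the Amazon Bedrock runtime API.
# The model ID and payload shape below are assumptions for illustration; each
# provider (AI21 Labs, Anthropic, Stability AI, Amazon Titan) uses its own format.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # hypothetical Amazon Titan text model
    body=json.dumps({
        "inputText": "Summarize the benefits of managed foundation models.",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
    }),
)

# The response body is a JSON document; for Titan text models the generated
# text is returned under "results".
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```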
Customers can also train and run their own models using high-performance infrastructure, including AWS Inferentia-powered Amazon EC2 Inf1 instances, AWS Trainium-powered Amazon EC2 Trn1 instances, and Amazon EC2 P5 instances powered by NVIDIA H100 Tensor Core GPUs. Additionally, customers can build, train, and deploy their own models with Amazon SageMaker or use Amazon SageMaker JumpStart to deploy some of today’s most popular FMs, including Cohere’s large language models, Technology Innovation Institute’s Falcon 40B, and Hugging Face’s BLOOM.
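For the SageMaker JumpStart path mentioned above, a deployment might look like the following sketch using the SageMaker Python SDK. The specific model ID, instance type, and prompt payload are assumptions for illustration; available models and supported instances change over time.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Minimal sketch: deploy a publicly available foundation model from SageMaker
# JumpStart to a real-time endpoint. The model ID and instance type below are
# assumptions for illustration.
model = JumpStartModel(model_id="huggingface-llm-falcon-40b-instruct-bf16")
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.12xlarge")

# Send a simple prompt to the deployed endpoint.
response = predictor.predict({
    "inputs": "Explain what a foundation model is in one sentence.",
    "parameters": {"max_new_tokens": 128},
})
print(response)

# Delete the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()
```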
Companies including Highspot, Lonely Planet, and Twilio are among the AWS customers already working with the Generative AI Innovation Center.