In the latest salvo in the ongoing battle among cloud providers to offer customers the most comprehensive set of generative AI models, Amazon Web Services (AWS) Principal Developer Advocate Donnie Prakoso announced today in a blog post that the cloud leader will bring open-source LLMs from red-hot French startup Mistral to Amazon Bedrock, its managed service for generative AI offerings and application development, launched last year.

Specifically, two Mistral models, Mistral 7B and Mixtral 8x7B, are set to be available through the service, though no date has been specified beyond "coming soon." When they are ready, they will appear here.
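Once the models go live, customers would presumably reach them through Bedrock's standard runtime API, the same way other Bedrock models are called today. Here is a minimal sketch in Python using the boto3 Bedrock runtime client; the model ID and request-body fields are assumptions, since AWS had not published them at the time of the announcement.

```python
import json

# Hypothetical model ID -- the real identifier was not published at announcement time.
MODEL_ID = "mistral.mistral-7b-instruct-v0:2"


def build_request(prompt: str, max_tokens: int = 256, temperature: float = 0.5) -> str:
    """Build a JSON request body in the bracketed instruction format
    Mistral's instruct models use (assumed here for Bedrock as well)."""
    return json.dumps({
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_tokens": max_tokens,
        "temperature": temperature,
    })


def summarize(text: str) -> dict:
    """Sketch of a Bedrock invocation; requires AWS credentials
    and access to the model in your account."""
    import boto3  # deferred import: only needed when actually calling AWS

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=build_request(f"Summarize the following text: {text}"),
    )
    # The response body is a streaming blob containing the model's JSON output.
    return json.loads(response["body"].read())
```

This mirrors how existing Bedrock models (Anthropic's Claude, Meta's Llama) are invoked: one `invoke_model` call with a model-specific JSON body, so switching models is largely a matter of changing the `modelId` and payload schema.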

Advantages of Mistral’s models for AWS customers

According to Prakoso’s blog post, Mistral 7B is engineered for efficiency, requiring low memory while delivering high throughput, and supports a wide array of use cases including text summarization, classification, completion, and code completion.

Mixtral 8x7B, meanwhile, is even more powerful, using a Mixture-of-Experts (MoE) architecture to perform text summarization, question answering, text classification, text completion, and code generation across more languages: English, French, German, Spanish, and Italian.


Until recently, Mixtral 8x7B was rated the highest-performing open-source LLM in the world, nearing GPT-4-level performance on some tasks, but it was recently overtaken by Smaug-72B.

Why put Mistral on Bedrock?

AWS’s choice to incorporate Mistral AI’s models into Amazon Bedrock is informed by several compelling factors, according to Prakoso’s blog post.

  • Cost-Performance Balance: Mistral AI’s models exemplify an excellent balance between cost and performance. Their efficient, affordable, and scalable nature makes them an attractive option for developers and organizations looking to leverage generative AI without incurring prohibitive costs.
  • Fast Inference Speed: With an emphasis on low latency, low memory requirements, and high throughput, Mistral AI models are optimized for fast inference speeds. This feature is crucial for scaling production use cases efficiently.
  • Transparency and Customizability: Mistral AI champions transparency and customizability in its models, enabling organizations to comply with stringent regulatory requirements.
  • Accessibility: The models are designed to be accessible to a broad audience, facilitating the integration of generative AI features into applications across various organizations, regardless of their size.

It makes sense for AWS to offer a wide range of models, and the move follows cloud competitor Microsoft adding Meta’s open-source Llama AI models to Azure AI Studio, its rival to Bedrock. But it also comes on the heels of Amazon’s big investment in Anthropic, a rival closed-source AI model provider, as well as reports that Amazon is working furiously to develop its own in-house generative AI foundation models.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.


