How to deploy Codestral on OUTSCALE’s sovereign and secure infrastructure?

by Stéphane Robert

Generative AI is playing an increasingly significant role in software development. By generating precise code snippets, automating testing, and simplifying documentation, Large Language Models (LLMs) enhance developers’ productivity while freeing up time for innovation.

However, the rise of Generative AI raises a critical question: how can we ensure the sovereignty and security of the generated code?

LLM and Asset Security

Companies and institutions regularly handle sensitive data and code, and using AI tools hosted on external online platforms can expose these strategic assets to various risks.

First, extraterritorial laws such as the United States Cloud Act allow certain governments to compel providers under their jurisdiction to hand over data, regardless of where it is stored, potentially compromising the confidentiality of critical information.

Second, industrial espionage and cyberattacks increase the risk of intrusion, code theft, or even sabotage, particularly through supply chain attacks.

To address these threats, organizations must consider solutions that combine the power of Generative AI with guarantees of security, data sovereignty, and intellectual property protection.

Dassault Systèmes and Mistral AI Partnership

In July 2024, Dassault Systèmes and Mistral AI announced a strategic partnership to deliver reliable industrial solutions based on artificial intelligence to accelerate the Generative Economy. This partnership combines the virtual twin experiences and the sovereign Cloud infrastructure of OUTSCALE (Dassault Systèmes) with the large language models (LLMs) from Mistral AI.

In this context, OUTSCALE has launched an LLM as a Service (LLMaaS) offering, specifically designed to enable rapid development of Generative AI use cases.

This offering is the first to integrate Mistral AI models on a 100% sovereign, secure Cloud qualified with SecNumCloud 3.2, ensuring top-tier performance while providing enhanced protection for sensitive data and intellectual property.

Available Mistral AI Models

As part of their partnership, OUTSCALE and Mistral AI offer a range of Artificial Intelligence models designed to meet the specific needs of companies and institutions.

These models include Mistral Small and Codestral, integrated into the LLMaaS (Large Language Model as a Service) deployed on OUTSCALE’s sovereign Cloud infrastructure.

This offer allows organizations to fully leverage the capabilities of Generative AI, in a sovereign and secure environment.

Mistral Small: A Lightweight and Versatile Model

Mistral Small is designed to offer lightweight Generative AI solutions while optimizing resource consumption. Ideal for projects requiring speed and efficiency without compromising quality, it adapts to a wide range of tasks such as text generation, auto-completion, and data processing from simple instructions. Thanks to its reduced latency, Mistral Small is particularly well-suited for real-time applications or environments that require quick responses.

Mistral Small is effective for:

  • Conversational assistants: Provides rapid and accurate contextual assistance, ideal for smooth interactions.
  • Documentation and text analysis: A powerful tool to automate editing or synthesis tasks effectively.
  • Integration into web applications: Perfect for content management systems or customer service interfaces that require fast processing and lightweight handling.

With Mistral Small, organizations benefit from a high-performance Generative AI model that optimizes their resources while ensuring complete confidentiality, thanks to OUTSCALE’s secure infrastructure.

Codestral: A Powerful AI for Code Generation

Codestral is an AI model specialized in code generation, designed for development teams looking to automate and optimize their programming processes.

This model supports over 80 programming languages, including Python, Java, and C++, and features an extended context window of 128,000 tokens, enabling code analysis and generation for large-scale projects.

Codestral is tailored to:

  • Code generation and completion: Responds to developers’ instructions by producing code segments that adhere to best practices in each language.
  • Unit test creation: Automates test generation to ensure code robustness and reliability.
  • Configuration and orchestration: Can write configuration files for tools like Ansible, Kubernetes, or Terraform.

In addition to its advanced features, Codestral offers exceptional flexibility, particularly thanks to the OUTSCALE environment, which allows organizations to fully secure their data and sensitive code.

This model is especially relevant for organizations handling confidential information, as it ensures that the data being processed never leaves OUTSCALE’s sovereign infrastructure.
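
To give a concrete idea of what these capabilities look like in practice, here is a minimal sketch in Python (using the requests library) that asks Codestral to generate unit tests through a chat-style API call. It assumes an instance deployed as described later in this article and exposing an OpenAI-compatible /v1/chat/completions route on port 5000; the IP address is a placeholder, the model name is taken from the Continue configuration further down, and API-key handling depends on your own deployment.

import requests

# Placeholder: replace with the IP address of your deployed Codestral VM.
API_BASE = "http://xxx.xx.xx.xxx:5000/v1"  # assumption: OpenAI-compatible route

SNIPPET = '''
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
'''

payload = {
    "model": "codestral-2405",
    "messages": [
        {
            "role": "user",
            "content": "Write pytest unit tests for this function:\n" + SNIPPET,
        }
    ],
    "temperature": 0.2,
}

# The OMI may or may not require an API key; add an Authorization header if yours does.
response = requests.post(f"{API_BASE}/chat/completions", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])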

OUTSCALE Offering

The OUTSCALE offering is based on a robust infrastructure comprising a Virtual Machine (VM) instance specifically optimized for Mistral models. This instance is equipped with two L40 GPUs, delivering exceptional computational power for applications requiring intensive execution of Generative AI models.

This infrastructure is deployed within a Virtual Private Cloud (VPC) in OUTSCALE’s cloudgouv-eu-west-1 region, ensuring enhanced performance and security to meet the strict requirements of businesses and institutions. Notably, this region is SecNumCloud 3.2 qualified, reflecting adherence to the highest security standards. This qualification ensures that the infrastructure meets rigorous criteria for data protection and compliance with European regulations, providing businesses with complete peace of mind regarding the sovereignty and security of their hosted data.

By combining this hardware with a SecNumCloud-qualified region, OUTSCALE’s offering provides a secure Generative Artificial Intelligence solution, enabling organizations to fully leverage the advanced capabilities of Mistral AI models while ensuring data sovereignty and confidentiality.

How to Proceed?

Deploying a Codestral Instance from OUTSCALE Marketplace

Let’s dive into a detailed tutorial on how to deploy a Codestral instance from OUTSCALE Marketplace and use it with Visual Studio Code, enhanced by the Continue extension.

  1. Once logged into the platform, navigate to the Catalog and search for “MISTRAL AI x OUTSCALE” in the left-hand section, then click to view the related products. You will find two OMIs (OUTSCALE Machine Images) titled Codestral and Mistral Small, illustrated in the image below.

Searching for Mistral on OUTSCALE Marketplace

  2. Click on Codestral and you will be directed to the Bundle details page. You should see, as illustrated below, the contents of the package, including the Codestral model and the two NVIDIA L40 GPUs.

Mistral Codestral Package Details

  3. In the right-hand section, accept the terms and click “Proceed to Subscription.”
  4. On the second screen, you’ll need to provide your connection credentials (AK/SK), and on the third, the VPC and Subnet you previously configured via Cockpit or Terraform infrastructure code.

Configuring Credentials and Network

  5. Once the entered information is verified, click “Deploy.” After a few minutes, you will see the deployed VM in Cockpit.

Deployed VM in Cockpit

Attach a public IP address to access it.
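
Before configuring your editor, you can check that the model is reachable from a machine allowed by the security groups. Here is a minimal sketch in Python, assuming the OMI exposes the standard /v1/models listing of a Mistral/OpenAI-compatible server on port 5000 (the same base URL used in the Continue configuration below); the IP address is a placeholder for the public IP you just attached.

import requests

# Placeholder: the public IP attached to the Codestral VM.
API_BASE = "http://xxx.xx.xx.xxx:5000/v1"

# List the models served by the instance; a 200 response mentioning
# "codestral-2405" confirms the endpoint is up and reachable.
response = requests.get(f"{API_BASE}/models", timeout=10)
response.raise_for_status()
print(response.json())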

Using Codestral with Visual Studio Code

From a machine authorized to connect to the instance via security groups, install Visual Studio Code and, in the extensions, search for the Continue extension from the company of the same name.

On the right, you should see an additional icon appear, as shown in the screenshot below.

Continue Icon in Visual Studio Code

At the bottom of the screen that appears, click the gear icon to open Continue’s configuration file (config.json) and add the Codestral model to your project. In the "models" section, add the following entry, replacing the IP address and model name if they differ:


  "models":
    {
      "title": "OSC-Mistral",
      "model": "codestral-2405",
      "apiBase": "http://xxx.xx.xx.xxx:5000/v1",
      "provider": "mistral"
    },
  ...
            

If you wish to use it for auto-completion, configure the "tabAutocompleteModel" section as shown below:


  "tabAutocompleteModel":
    {
      "title": "Tab Autocomplete Model",
      "provider": "mistral",
      "model": "codestral-2405",
      "apiBase": "http://xxx.xx.xx.xxx:5000/v1"
    },
            

That’s it; everything is ready for our test.

A Test

I had fun submitting a prompt asking it to generate infrastructure code I am very familiar with: Ansible.

Generate an Ansible playbook to install Apache on Linux.

Guidelines to follow:

- Support multiple Linux families
- Include the proper Apache package name to install based on the OS in a variables file
- Ansible module names must be complete, e.g., ansible.builtin.apt
- Use handlers to restart services if the service configuration is modified
- Use true or false for boolean values
- Provide the Apache configuration in a template
- Target hosts are machines in the webservers group
- Provide the main playbook
- Provide an example Jinja template for Apache with multiple variables
- Provide variables files for Debian and RedHat families

The result:

Here’s a solution for your problem:

Playbook (main.yml):


- name: Install and configure Apache
  hosts: webservers
  become: true
  vars_files:
    - "{{ ansible_os_family }}.yml"
  tasks:
    - name: Install Apache
      package:
        name: "{{ apache_package }}"
        state: present

    - name: Copy Apache configuration file
      template:
        src: apache.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      notify:
        - Restart Apache
  handlers:
    - name: Restart Apache
      service:
        name: "{{ apache_service }}"
        state: restarted

Template (apache.conf.j2):

ServerRoot "{{ apache_root }}"

Listen {{ apache_port }}


    AllowOverride none
    Require all denied


DocumentRoot "{{ apache_document_root }}"

<Directory "{{ apache_document_root }}">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted

Variables for Debian (Debian.yml):


apache_package: apache2
apache_service: apache2
apache_root: /etc/apache2
apache_port: 80
apache_document_root: /var/www/html

Variables for RedHat (RedHat.yml):


apache_package: httpd
apache_service: httpd
apache_root: /etc/httpd
apache_port: 80
apache_document_root: /var/www/html

I’ll let you judge, but the result is truly impressive. It understood the instructions and followed them almost to the letter. It even provided examples of variable files for the Debian and RedHat families, which is very practical.

It also used handlers to restart services when the configuration is modified and used true or false for boolean values. Finally, it provided the Apache configuration in a Jinja template. Bravo!

The prompt could still be improved by asking it to properly define the owner and permissions of the configuration files.

Conclusion

By combining the power of Codestral for code generation with the security offered by OUTSCALE, companies can not only improve their productivity but also ensure the sovereignty and security of their code. Whether for industrial applications, complex projects, or critical environments, this solution is a true asset for development teams.

Stay tuned, as new announcements are coming soon!
