AI providers

The AI Assistant extension supports multiple AI providers:

  • ChatGPT (OpenAI)
  • Anthropic Claude
  • Google Gemini

You can either set a single provider/model to be used across extensions or explicitly choose different providers/models for specific tasks. To do this, the extension offers multi-level provider configuration with a clear priority order:

  • Global AI provider settings: located in Stores -> Configuration -> Mirasvit extensions -> Developer -> AI configuration. These defaults apply to all Mirasvit extensions that use AI. Lowest priority.
  • Module AI provider settings: located in Stores -> Configuration -> Mirasvit extensions -> AI Assistant. Use these when you want a different provider or model specifically for the AI Assistant module. Higher priority than Global.
  • Per-item AI provider settings: located inside a prompt, automation rule, or prompt popup. These settings override all others and are applied when that item runs. Highest priority.
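The three-level lookup above can be sketched as a simple resolution function. The names and dictionary structure below are illustrative, not the extension's actual code:

```python
def resolve_ai_settings(global_cfg, module_cfg=None, item_cfg=None):
    """Return the effective AI provider settings.

    More specific levels override less specific ones:
    global < module < per-item (hypothetical structure,
    for illustration only).
    """
    effective = dict(global_cfg)           # lowest priority: global defaults
    for override in (module_cfg, item_cfg):
        if override:
            effective.update(override)     # higher levels win, key by key
    return effective

# Example: the module overrides the model, and a prompt overrides the provider.
global_cfg = {"provider": "openai", "model": "gpt-4o"}
module_cfg = {"model": "gpt-4o-mini"}
item_cfg = {"provider": "anthropic"}

print(resolve_ai_settings(global_cfg, module_cfg, item_cfg))
# {'provider': 'anthropic', 'model': 'gpt-4o-mini'}
```

Settings not overridden at a higher level (here, the model chosen at the module level) fall through from the level below, which is why per-item settings only need to specify what actually differs.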

Connecting AI providers

To connect an AI provider, fill out the following fields:

Use the following settings to configure the OpenAI provider.

  • Enable OpenAI provider:

    • Yes: use the global AI settings from the Core module.
    • No: use the AI Assistant module's provider settings below.
  • OpenAI API key: paste your API key into the OpenAI Secret Key field to enable AI features in your store.

    note

    The AI Assistant extension accesses ChatGPT via an API. You need to sign up for an OpenAI account in order to obtain the secret key.

    Generate the OpenAI secret API key on the page platform.openai.com/account/api-keys. Click Create new secret key and copy the key.

    [Image: New secret key]

    Using the ChatGPT API costs money, billed to your OpenAI account. The total cost depends on the number of tokens processed by the language model. Refer to openai.com/api/pricing/ for current prices.
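Because costs scale with token counts, it helps to estimate token usage before sending large texts. A common rough heuristic for English text is about 4 characters per token; this is an assumption, not an exact tokenizer (OpenAI's tiktoken library gives exact counts per model):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Roughly estimate the token count of English text.

    The 4-characters-per-token ratio is a rule of thumb only;
    actual counts depend on the model's tokenizer.
    """
    return max(1, round(len(text) / chars_per_token))

description = "Lightweight waterproof hiking jacket with taped seams. " * 50
print(estimate_tokens(description))  # rough token estimate for this text
```

For non-English languages the ratio can differ considerably, so treat the estimate as an order-of-magnitude guide rather than a billing prediction.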

    • Default OpenAI model: choose the language model that best suits your needs. AI Assistant can work with any of the following models:
      • GPT-5 (gpt-5): the latest and most powerful high-intelligence model. Maximum input: 400 000 tokens. Trained on data up to May 2024.
      • GPT-5 mini (gpt-5-mini): a faster, more cost-efficient version of GPT-5, great for well-defined tasks and precise prompts. Maximum input: 400 000 tokens. Trained on data up to May 2024.
      • GPT-5 nano (gpt-5-nano): the fastest, cheapest version of GPT-5, great for summarization and classification tasks. Maximum input: 400 000 tokens. Trained on data up to May 2024.
      • GPT-4.1 (gpt-4.1): a high-intelligence model suited for complex tasks and processing complex, structured content. Maximum input: 1 047 576 tokens. Trained on data up to Jun 2024.
      • GPT-4.1 mini (gpt-4.1-mini): balances intelligence, speed, and cost, which makes it attractive for many use cases. Maximum input: 1 047 576 tokens. Trained on data up to Jun 2024.
      • GPT-4.1 nano (gpt-4.1-nano): the fastest, most cost-effective GPT-4.1 model. Maximum input: 1 047 576 tokens. Trained on data up to Jun 2024.
      • GPT-4o (gpt-4o): an improved GPT-4 Turbo model: twice as fast and up to 50% cheaper, with improved capabilities in non-English languages and a new tokenizer that handles non-English text more efficiently than GPT-4 Turbo. Maximum input: 128 000 tokens. Trained on data up to Oct 2023.
      • GPT-4o mini (gpt-4o-mini): a fast, affordable small model for focused tasks. It accepts both text and image inputs and produces text outputs (including Structured Outputs). It is ideal for fine-tuning; outputs from a larger model such as GPT-4o can be distilled into GPT-4o mini to produce similar results at lower cost and latency. Maximum input: 128 000 tokens. Trained on data up to Oct 2023.
      • GPT-4 Turbo (gpt-4-turbo-preview): an improved GPT-4 model featuring better instruction following, JSON mode, and more. Maximum input: 128 000 tokens. Trained on data up to Apr 2023.
      • GPT-4 (gpt-4): more capable for complex tasks and gives better results on large texts. Maximum input: 8 192 tokens. Trained on data up to Sep 2021.
      • GPT-3.5 (gpt-3.5-turbo): optimized for chat; OpenAI considers it the most capable GPT-3.5 model, at one-tenth the cost of text-davinci-003. Maximum input: 16 385 tokens. Trained on data up to Sep 2021.
      • GPT-3 (text-davinci-003): a legacy model suited for any language task, configured for longer, higher-quality output and consistent instruction following. Maximum input: 4 097 tokens. Trained on data up to Jun 2021.
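The maximum-input limits above determine which models can accept a given prompt. A small sketch, using the model IDs and limits from the list, that checks whether an estimated input size fits a model's context window:

```python
# Maximum input sizes (tokens) per model ID, as listed above.
MAX_INPUT_TOKENS = {
    "gpt-5": 400_000,
    "gpt-5-mini": 400_000,
    "gpt-5-nano": 400_000,
    "gpt-4.1": 1_047_576,
    "gpt-4.1-mini": 1_047_576,
    "gpt-4.1-nano": 1_047_576,
    "gpt-4o": 128_000,
    "gpt-4o-mini": 128_000,
    "gpt-4-turbo-preview": 128_000,
    "gpt-4": 8_192,
    "gpt-3.5-turbo": 16_385,
    "text-davinci-003": 4_097,
}

def fits_context(model_id: str, estimated_tokens: int) -> bool:
    """True if an input of the given size fits the model's context window."""
    return estimated_tokens <= MAX_INPUT_TOKENS[model_id]

print(fits_context("gpt-4", 10_000))     # False: exceeds 8 192 tokens
print(fits_context("gpt-4.1", 500_000))  # True: fits within 1 047 576 tokens
```

If a prompt exceeds the chosen model's limit, either switch to a larger-context model (the GPT-4.1 family has the largest window in this list) or shorten the input before sending it.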