AI

The AI API provides developers with seamless access to AI functionality without requiring API keys, configuration, or extra dependencies.

Some users might not have access to this API. If a user doesn't have access to Raycast Pro, they will be asked if they want to get access when your extension calls the AI API. If the user doesn't wish to get access, the API call will throw an error.

You can check if a user has access to the API using `environment.canAccess(AI)`.
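For example, a command can check access up front and degrade gracefully instead of triggering the upgrade prompt. A sketch, with illustrative prompt and toast text:

```typescript
import { AI, environment, showToast, Toast } from "@raycast/api";

export default async function command() {
  if (!environment.canAccess(AI)) {
    // The user doesn't have AI access; avoid calling AI.ask, which would prompt an upsell.
    await showToast({ style: Toast.Style.Failure, title: "AI access is not available" });
    return;
  }

  const answer = await AI.ask("Summarize today's tech news in one sentence");
  await showToast({ title: "Answer", message: answer });
}
```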

API Reference

AI.ask

Ask AI anything you want. Use this in “no-view” Commands, effects, or callbacks. In a React component, you might want to use the useAI util hook instead.

Signature

```typescript
async function ask(prompt: string, options?: AskOptions): Promise<string> & EventEmitter;
```

Example

```typescript
import { AI, Clipboard } from "@raycast/api";

export default async function command() {
  const answer = await AI.ask("Suggest 5 jazz songs");

  await Clipboard.copy(answer);
}
```
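Since the returned value is also an EventEmitter, the answer can be streamed while it is generated. A sketch that shows partial output in a toast (the prompt and toast titles are illustrative):

```typescript
import { AI, showToast, Toast } from "@raycast/api";

export default async function command() {
  // Don't await yet: the returned value is both a Promise and an EventEmitter.
  const answer = AI.ask("Suggest 5 jazz songs");
  const toast = await showToast({ style: Toast.Style.Animated, title: "Answering…" });

  // Append each streamed chunk to the toast as it arrives.
  answer.on("data", (data: string) => {
    toast.message = (toast.message ?? "") + data;
  });

  await answer;
  toast.style = Toast.Style.Success;
  toast.title = "Done";
}
```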

Parameters

  • prompt* (string): The prompt to ask the AI.

  • options (AI.AskOptions): Options to control which AI model to use and how it should behave.

Return

A Promise that resolves with the full prompt completion. The returned value also implements EventEmitter and emits "data" events with chunks of the answer as they are generated.

Types

AI.Creativity

Concrete tasks, such as fixing grammar, require less creativity, while open-ended questions, such as generating ideas, require more.

```typescript
type Creativity = "none" | "low" | "medium" | "high" | "maximum" | number;
```

If a number is passed, it needs to be in the range 0-2. Values above 2 are clamped to 2, and values below 0 are clamped to 0.
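The clamping behavior can be sketched as a small helper (hypothetical; Raycast applies this internally, you don't need to clamp values yourself):

```typescript
// Clamp a numeric creativity value into the supported 0-2 range,
// mirroring the documented behavior of the AI API.
function clampCreativity(value: number): number {
  return Math.min(2, Math.max(0, value));
}

clampCreativity(3.5); // clamped to 2
clampCreativity(-1); // clamped to 0
clampCreativity(1.2); // in range, unchanged
```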

AI.Model

The AI model used to answer the prompt. Defaults to AI.Model["OpenAI_GPT3.5-turbo"].

Enumeration members

  • OpenAI_GPT4: GPT-4 is OpenAI's most capable model with broad general knowledge, allowing it to follow complex instructions and solve difficult problems.

  • OpenAI_GPT4-turbo: GPT-4 Turbo from OpenAI has a big context window that fits hundreds of pages of text, making it a great choice for workloads that involve longer prompts.

  • OpenAI_GPT4o: GPT-4o is the most advanced and fastest model from OpenAI, making it a great choice for complex everyday problems and deeper conversations.

  • OpenAI_GPT4o-mini: GPT-4o mini is a highly intelligent and fast model that is ideal for a variety of everyday tasks.

  • Anthropic_Claude_Haiku: Claude 3 Haiku is Anthropic's fastest model, with a large context window that makes it ideal for analyzing code, documents, or large amounts of text.

  • Anthropic_Claude_Sonnet: Claude 3.5 Sonnet from Anthropic has enhanced intelligence with increased speed. It excels at complex tasks like visual reasoning or workflow orchestrations.

  • Anthropic_Claude_Opus: Claude 3 Opus is Anthropic's most intelligent model, with best-in-market performance on highly complex tasks. It stands out for remarkable fluency.

  • Perplexity_Llama3.1_Sonar_Small: Perplexity's Llama 3.1 Sonar Small is built for speed. It quickly gives you helpful answers using the latest internet knowledge while minimizing hallucinations.

  • Perplexity_Llama3.1_Sonar_Large: Perplexity's advanced model. It can handle complex questions and considers current web knowledge to provide well-reasoned, in-depth answers.

  • Perplexity_Llama3.1_Sonar_Huge: Perplexity's most advanced model. It offers performance on par with today's state-of-the-art models.

  • Llama3.1_70B: Llama 3.1 70B is a versatile open-source model from Meta suitable for complex reasoning tasks, multilingual interactions, and extensive text analysis. Powered by Groq.

  • Llama3.1_8B: Llama 3.1 8B is an open-source model from Meta, optimized for instruction following and high-speed performance. Powered by Groq.

  • Llama3_70B: Llama 3 70B from Meta is a highly capable open-source LLM that can serve as a tool for various text-related tasks. Powered by Groq.

  • Llama3.1_405B: Llama 3.1 405B is Meta's flagship open-source model, offering unparalleled capabilities in general knowledge, steerability, math, tool use, and multilingual translation. Powered by together.ai.

  • MixtraL_8x7B: Mixtral 8x7B from Mistral is an open-source model that demonstrates high performance in generating code and text at an impressive speed. Powered by Groq.

  • Mistral_Nemo: Mistral Nemo is a small model built in collaboration with NVIDIA, and released under the Apache 2.0 license.

  • Mistral_Large2: Mistral Large is Mistral's flagship model, capable of code generation, mathematics, and reasoning, with stronger multilingual support.

If a model isn't available to the user, Raycast will fall back to a similar one:

  • AI.Model.Anthropic_Claude_Opus and AI.Model.Anthropic_Claude_Sonnet -> AI.Model.Anthropic_Claude_Haiku

  • AI.Model.OpenAI_GPT4 and AI.Model["OpenAI_GPT4-turbo"] -> AI.Model["OpenAI_GPT4o-mini"]

  • AI.Model["Perplexity_Llama3.1_Sonar_Large"] and AI.Model["Perplexity_Llama3.1_Sonar_Huge"] -> AI.Model["Perplexity_Llama3.1_Sonar_Small"]

  • AI.Model.Mistral_Large2 -> AI.Model.Mistral_Nemo
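A specific model is requested through AskOptions; if the user's plan doesn't include it, Raycast applies the fallbacks above. A sketch with an illustrative prompt:

```typescript
import { AI, Clipboard } from "@raycast/api";

export default async function command() {
  // Request GPT-4o explicitly; Raycast may substitute a similar model
  // if it isn't available on the user's plan.
  const answer = await AI.ask("Explain event loops in two sentences", {
    model: AI.Model.OpenAI_GPT4o,
    creativity: "low",
  });

  await Clipboard.copy(answer);
}
```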

AI.AskOptions

Properties

  • creativity (AI.Creativity): Concrete tasks, such as fixing grammar, require less creativity, while open-ended questions, such as generating ideas, require more. If a number is passed, it needs to be in the range 0-2; values above 2 are clamped to 2 and values below 0 are clamped to 0.

  • model (AI.Model): The AI model used to answer the prompt.

  • signal (AbortSignal): Abort signal to cancel the request.
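The signal option works with a standard AbortController. A sketch that cancels a slow request (the 10-second timeout is illustrative):

```typescript
import { AI, showToast, Toast } from "@raycast/api";

export default async function command() {
  const controller = new AbortController();
  // Abort the request if it takes longer than 10 seconds.
  const timeout = setTimeout(() => controller.abort(), 10_000);

  try {
    const answer = await AI.ask("Write a haiku about autumn", { signal: controller.signal });
    await showToast({ title: "Answer", message: answer });
  } catch (error) {
    // AI.ask rejects when the signal aborts (or on other failures).
    await showToast({ style: Toast.Style.Failure, title: "Request cancelled or failed" });
  } finally {
    clearTimeout(timeout);
  }
}
```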
