
4.8.1 ML_GENERATE

The ML_GENERATE routine uses the specified large language model (LLM) to generate text-based content in response to a given natural-language query.

This topic contains the following sections:

  • ML_GENERATE Syntax

  • Syntax Examples

ML_GENERATE Syntax

mysql> select sys.ML_GENERATE('QueryInNaturalLanguage', [options]);

options: {
  JSON_OBJECT('key','value'[,'key','value'] ...)
    'key','value': {
    ['task', {'generation'|'summarization'}]
    ['model_id', {'LargeLanguageModelID'}]
    ['context', 'Context']
    ['language', 'Language']
    ['temperature', Temperature]
    ['max_tokens', MaxTokens]
    ['top_k', K]
    ['top_p', P]
    ['repeat_penalty', RepeatPenalty]
    ['frequency_penalty', FrequencyPenalty]
    ['presence_penalty', PresencePenalty]
    ['stop_sequences', JSON_ARRAY('StopSequence1'[,'StopSequence2'] ...)]
    }
}

The following are ML_GENERATE parameters:

  • QueryInNaturalLanguage: specifies the natural-language query that is passed to the LLM.

  • options: specifies optional parameters as key-value pairs in JSON format. It can include the following parameters:

    • task: specifies the task expected from the LLM. Default value is generation. Possible values are:

      • generation: generates text-based content.

      • summarization: generates a summary for existing text-based content.

    • model_id: specifies the LLM to use for the task.

      As of MySQL 9.3.1, default value is llama3.2-3b-instruct-v1. In earlier versions of MySQL, default value is mistral-7b-instruct-v1. Possible values are:

      • The following HeatWave In-Database LLMs are available as of MySQL 9.3.1:

        • llama3.1-8b-instruct-v1

        • llama3.2-1b-instruct-v1

        • llama3.2-3b-instruct-v1

        • mistral-7b-instruct-v3

      • The following OCI Generative AI Service LLMs are available as of MySQL 9.2.2:

        • meta.llama-3.2-90b-vision-instruct

        • meta.llama-3.3-70b-instruct

      • The following OCI Generative AI Service LLMs are available as of MySQL 9.1.2:

        • cohere.command-r-plus-08-2024

        • cohere.command-r-08-2024

        • meta.llama-3.1-405b-instruct

      • The following HeatWave In-Database LLMs are available as of MySQL 9.0.0:

        • mistral-7b-instruct-v1

        • llama2-7b-v1

        • llama3-8b-instruct-v1

      To view the lists of available LLMs, see HeatWave In-Database LLMs and OCI Generative AI Service LLMs.

      Note

      The summarization task supports HeatWave In-Database LLMs only.

    • context: specifies the context used to augment the query and guide the text generation of the LLM. Default value is NULL.
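      As a sketch, a call that augments a query with a context string might look as follows. The query and context text here are illustrative, not taken from the product documentation:

      mysql> select sys.ML_GENERATE("What is our refund window?", JSON_OBJECT("task", "generation", "context", "Refunds are accepted within 30 days of purchase."));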

    • language: specifies the language to be used for writing queries, ingesting documents, and generating the output. To set the value of the language parameter, use the two-letter ISO 639-1 code for the language. This parameter is available as of MySQL 9.0.1-u1.

      Default value is en.

      For possible values, see the list of supported languages in Languages.

    • temperature: specifies a non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations.

      Default value is 0 for all LLMs.

      Possible values are non-negative float values.

      It is suggested that:

      • To generate the same output for a particular prompt every time you run it, set the temperature to 0.

      • To generate a random new statement for a particular prompt every time you run it, increase the temperature.
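      For example, to make a generation reproducible, a call can pin the temperature to 0. The query text here is illustrative:

      mysql> select sys.ML_GENERATE("Write a tagline for a bakery.", JSON_OBJECT("task", "generation", "temperature", 0));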

    • max_tokens: specifies the maximum number of tokens to predict per generation using an estimate of three tokens per word. Default value is 256. Possible values are:

      • For Llama 3.1 and 3.2 LLMs, integer values between 1 and 128256.

      • For Mistral V3 LLM, integer values between 1 and 32000.

      • For Mistral V1 LLM, integer values between 1 and 8000.

      • For Llama 2 and 3 LLMs, integer values between 1 and 4096.

      • For OCI Generative AI Service LLMs, integer values between 1 and 4000.
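      For example, to cap a response at roughly 50 words, a call might limit the prediction to about 150 tokens, following the three-tokens-per-word estimate above. The query and figures here are illustrative:

      mysql> select sys.ML_GENERATE("Summarize the benefits of indexing.", JSON_OBJECT("task", "generation", "max_tokens", 150));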

    • top_k: specifies the number of top most likely tokens to consider for text generation at each step. Default value is 40, which means that the top 40 most likely tokens are considered for text generation at each step. Possible values are integer values between 0 and 32000.

    • top_p: specifies a number, p, and ensures that only the most likely tokens, with probabilities that sum to p, are considered for generation at each step. A higher value of p introduces more randomness into the output. Default value is 0.95. Possible values are float values between 0 and 1.

      • To disable this method, set to 1.0 or 0.

      • To eliminate tokens with low likelihood, assign p a lower value. For example, if set to 0.1, tokens within top 10% probability are included.

      • To include tokens with low likelihood, assign p a higher value. For example, if set to 0.9, tokens within top 90% probability are included.

      If you also specify the top_k parameter, the LLM considers only those top-k tokens whose probabilities sum to p, and ignores the remaining tokens.
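      For example, a call that narrows sampling to the 20 most likely tokens whose probabilities sum to at most 0.8 might look as follows. The query text and parameter values here are illustrative:

      mysql> select sys.ML_GENERATE("Suggest a name for a chess club.", JSON_OBJECT("task", "generation", "top_k", 20, "top_p", 0.8));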

    • repeat_penalty: assigns a penalty when a token appears repeatedly. High penalties encourage fewer repeated tokens and produce more random outputs. Default value is 1.1. Possible values are float values between 0 and 2.

      Note

      This parameter is supported for HeatWave In-Database LLMs only.

    • frequency_penalty: assigns a penalty when a token appears frequently. High penalties encourage fewer repeated tokens and produce more random outputs. Default value is 0. Possible values are float values between 0 and 1.

    • presence_penalty: assigns a penalty to each token when it appears in the output, to encourage outputs that use tokens that have not appeared yet. This is similar to frequency_penalty, except that the penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.

      Note

      This parameter is supported for OCI Generative AI Service LLMs only.

      Default value is 0. Possible values are:

      • For Cohere LLMs, float values between 0 and 1.

      • For Meta LLMs, float values between -2 and 2.

    • stop_sequences: specifies a list of character sequences, such as a word, a phrase, a newline, or a period, that tells the LLM when to end the generated output. If you specify more than one stop sequence, the LLM stops when it reaches any of them. Default value is NULL.
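      For example, to stop generation at the first newline or period, a call might look as follows. The query text here is illustrative:

      mysql> select sys.ML_GENERATE("List one fact about MySQL.", JSON_OBJECT("task", "generation", "stop_sequences", JSON_ARRAY("\n", ".")));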

Syntax Examples

  • Generating text-based content in English using the mistral-7b-instruct-v1 model:

    mysql> select sys.ML_GENERATE("What is AI?", JSON_OBJECT("task", "generation", "model_id", "mistral-7b-instruct-v1", "language", "en"));

  • Summarizing English text using the mistral-7b-instruct-v1 model:

    mysql> select sys.ML_GENERATE(@text, JSON_OBJECT("task", "summarization", "model_id", "mistral-7b-instruct-v1", "language", "en"));

    where @text is set as follows:

    SET @text="Artificial Intelligence (AI) is a rapidly growing field that has the potential to
    revolutionize how we live and work. AI refers to the development of computer systems that can
    perform tasks that typically require human intelligence, such as visual perception, speech
    recognition, decision-making, and language translation.\n\nOne of the most significant developments in
    AI in recent years has been the rise of machine learning, a subset of AI that allows computers to learn
    from data without being explicitly programmed. Machine learning algorithms can analyze vast amounts
    of data and identify patterns, making them increasingly accurate at predicting outcomes and making
    decisions.\n\nAI is already being used in a variety of industries, including healthcare, finance, and
    transportation. In healthcare, AI is being used to develop personalized treatment plans for patients
    based on their medical history and genetic makeup. In finance, AI is being used to detect fraud and make
    investment recommendations. In transportation, AI is being used to develop self-driving cars and improve
    traffic flow.\n\nDespite the many benefits of AI, there are also concerns about its potential impact on
    society. Some worry that AI could lead to job displacement, as machines become more capable of performing
    tasks traditionally done by humans. Others worry that AI could be used for malicious ";