
4.3.1 Generating New Content

This section describes how to generate new text-based content using HeatWave GenAI.

Before You Begin

  • Connect to your HeatWave Database System.

  • To run batch queries, add the natural-language queries to a column in a new or existing table. See Running Batch Queries.
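
For batch runs, the input table only needs a text column holding one query per row. A minimal sketch, using the illustrative names demo_db, input_table, and Input (substitute your own database, table, and column names):

```sql
-- Hypothetical input table for batch generation queries.
CREATE DATABASE IF NOT EXISTS demo_db;

CREATE TABLE IF NOT EXISTS demo_db.input_table (
  id INT AUTO_INCREMENT PRIMARY KEY,
  Input VARCHAR(1024)   -- one natural-language query per row
);

INSERT INTO demo_db.input_table (Input) VALUES
  ('Write an article on Artificial intelligence in 200 words.'),
  ('Summarize the benefits of machine learning in 100 words.');
```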

Generating Content

To generate text-based content using HeatWave GenAI, do the following:

  1. To load the LLM in HeatWave memory, use the ML_MODEL_LOAD routine:

    call sys.ML_MODEL_LOAD('LLM', NULL);

    Replace LLM with the name of the LLM that you want to use. To view the list of supported LLMs, see LLMs.

    For example:

    call sys.ML_MODEL_LOAD('mistral-7b-instruct-v1', NULL);

    This step is optional. If you skip it, the ML_GENERATE routine loads the specified LLM automatically, but the first run takes longer because the LLM is loaded before the output is generated.

  2. To define your natural-language query, set the @query session variable:

    set @query="QueryInNaturalLanguage";

    Replace QueryInNaturalLanguage with a natural-language query of your choice. For example:

    set @query="Write an article on Artificial intelligence in 200 words.";
  3. To generate text-based content, pass the query to the LLM using the ML_GENERATE routine with the task parameter set to generation:

    select sys.ML_GENERATE(@query, JSON_OBJECT("task", "generation", "model_id", "LLM", "language", "Language"));

    Replace the following:

    • LLM: the name of the LLM to use. This must be the same LLM that you loaded in the previous step.

    • Language: the two-letter ISO 639-1 code for the language that you want to use. The default is en (English). To view the list of supported languages, see Languages.

    For example:

    select sys.ML_GENERATE(@query, JSON_OBJECT("task", "generation", "model_id", "mistral-7b-instruct-v1", "language", "en"));

    The text-based content that the LLM generates in response to your query is printed as output, similar to the following:

      | {"text": " Artificial Intelligence, commonly referred to as AI, is a
      rapidly growing field that focuses on creating intelligent machines capable
      of performing tasks that typically require human intelligence. These tasks
      include things like understanding natural language, recognizing images, and
      making decisions.\n\nAI technology has come a long way in recent years, thanks
      to advances in machine learning and deep learning algorithms. These algorithms
      allow machines to learn from data and improve their performance over time.
      This has led to the development of more advanced AI systems, such as virtual
      assistants like Siri and Alexa, which can help users with tasks like setting
      reminders and answering questions.\n\nAI is also being used in a variety of other
      industries, including healthcare, finance, and transportation. In healthcare, AI
      is being used to help doctors diagnose diseases and develop treatment plans. In
      finance, AI is being used to detect fraud and make investment decisions. In
      transportation, AI is being used to develop self-driving cars and improve traffic
      flow.\n\nDespite the many benefits of AI, there are also concerns about its potential
      impact on society. Some worry that AI could lead to job displacement and a loss of
      privacy. Others worry that AI could be used for malicious purposes, such as
      cyber attacks or surveillance.\n"} |
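
The routine returns a JSON object. If you want only the generated text without the JSON wrapper, you can extract the text field with standard MySQL JSON functions. A sketch, reusing the @query variable set above:

```sql
-- Extract only the generated text from the JSON returned by ML_GENERATE.
SELECT JSON_UNQUOTE(JSON_EXTRACT(
  sys.ML_GENERATE(@query, JSON_OBJECT("task", "generation",
                                      "model_id", "mistral-7b-instruct-v1",
                                      "language", "en")),
  '$.text')) AS generated_text;
```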

Running Batch Queries

To run multiple generation queries in parallel, use the ML_GENERATE_TABLE routine. This is faster than calling the ML_GENERATE routine separately for each query.

To run batch queries using ML_GENERATE_TABLE, perform the following steps:

  1. To load the LLM in HeatWave memory, use the ML_MODEL_LOAD routine:

    call sys.ML_MODEL_LOAD('LLM', NULL);

    Replace LLM with the name of the LLM that you want to use. To view the list of supported LLMs, see LLMs.

    For example:

    call sys.ML_MODEL_LOAD('mistral-7b-instruct-v1', NULL);

    This step is optional. If you skip it, the ML_GENERATE_TABLE routine loads the specified LLM automatically, but the first run takes longer because the LLM is loaded before the output is generated.

  2. In the ML_GENERATE_TABLE routine, specify the column that contains the input queries and the column in which to store the generated text-based responses:

    select sys.ML_GENERATE_TABLE("InputDBName.InputTableName.InputColumn", "OutputDBName.OutputTableName.OutputColumn", JSON_OBJECT("task", "generation", "model_id", "LLM", "language", "Language"));

    Replace the following:

    • InputDBName: the name of the database that contains the table column where your input queries are stored.

    • InputTableName: the name of the table that contains the column where your input queries are stored.

    • InputColumn: the name of the column that contains input queries.

    • OutputDBName: the name of the database that contains the table where you want to store the generated outputs. This can be the same as the input database.

    • OutputTableName: the name of the table in which to create a new column for the generated outputs. This can be the same as the input table. If the specified table does not exist, it is created.

    • OutputColumn: the name of the new column in which to store the outputs generated for the input queries.

    • LLM: the name of the LLM to use. This must be the same LLM that you loaded in the previous step.

    • Language: the two-letter ISO 639-1 code for the language that you want to use. The default is en (English). To view the list of supported languages, see Languages.

    For example:

    select sys.ML_GENERATE_TABLE("demo_db.input_table.Input", "demo_db.output_table.Output", JSON_OBJECT("task", "generation", "model_id", "mistral-7b-instruct-v1", "language", "en"));

    To learn more about the available routine options, see ML_GENERATE_TABLE Syntax.
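
Putting the batch steps together, a minimal end-to-end sketch (the database, table, and column names demo_db, input_table, Input, output_table, and Output are illustrative, and the input table is assumed to already hold one query per row):

```sql
-- 1. Load the LLM into HeatWave memory (optional, but the first run is faster).
call sys.ML_MODEL_LOAD('mistral-7b-instruct-v1', NULL);

-- 2. Run every query stored in demo_db.input_table.Input in one batch;
--    responses are written to the Output column of demo_db.output_table.
select sys.ML_GENERATE_TABLE("demo_db.input_table.Input",
                             "demo_db.output_table.Output",
                             JSON_OBJECT("task", "generation",
                                         "model_id", "mistral-7b-instruct-v1",
                                         "language", "en"));

-- 3. Inspect the generated responses.
select Output from demo_db.output_table;
```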