
4.6.2 Using Your Own Embeddings With Retrieval-Augmented Generation

As of MySQL 9.2.1, HeatWave GenAI lets you use tables containing your own vector embeddings to run retrieval-augmented generation (RAG) with vector search. The ML_RAG and ML_RAG_TABLE routines let you specify the column names to use as filters when finding relevant embedding tables for context retrieval.

In addition to the specified column names, the ML_RAG and ML_RAG_TABLE routines use the name of the embedding model used to embed the input query to find relevant embedding tables for context retrieval.
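
Because the model name is recorded as a comment on the vector column (see the table requirements below), you can check which embedding model a candidate table is tagged with before running RAG. The following is a minimal sketch, assuming standard INFORMATION_SCHEMA access and the demo_db schema used in the examples in this topic:

-- A sketch: list vector columns in demo_db whose GENAI_OPTIONS comment records an
-- embedding model, which is what ML_RAG and ML_RAG_TABLE match during table filtering.
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_COMMENT
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'demo_db'
AND COLUMN_COMMENT LIKE 'GENAI_OPTIONS=EMBED_MODEL_ID=%';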

The following sections in this topic describe how you can use your own embedding tables for context retrieval:

Before You Begin

  • Review the Requirements.

  • Connect to your HeatWave Database System.

  • Create a table that satisfies the following criteria:

    • To qualify as a valid embedding table, the table must contain the following columns:

      • A string column containing the text segments.

      • A vector column containing the vector embeddings of the text segments.

    • A comment on the vector column must specify the name of the embedding model used to generate the vector embeddings.

    The following is an example of a valid embedding table that can be used for context retrieval:

    CREATE TABLE demo_table (id INT AUTO_INCREMENT, demo_text TEXT, string_embedding TEXT, demo_embedding VECTOR (3) COMMENT 'GENAI_OPTIONS=EMBED_MODEL_ID=demo_embedding_model', primary key (id));
    INSERT INTO demo_table (demo_text, string_embedding) VALUES('What is MySQL??', '[0,1,0]');
    INSERT INTO demo_table (demo_text, string_embedding) VALUES('What is HeatWave?', '[0,0,1]');
    INSERT INTO demo_table (demo_text, string_embedding) VALUES('What is HeatWave GenAI?', '[0,1,1]');
    UPDATE demo_table SET demo_embedding=STRING_TO_VECTOR(string_embedding);
    ALTER TABLE demo_table DROP COLUMN string_embedding;

    To learn how to generate vector embeddings and embedding tables using HeatWave GenAI, see Generating Vector Embeddings.

  • If you want to use an inbuilt vector store table along with your own embedding table, complete the steps to set up the vector store.

  • For Running Batch Queries, add the natural-language queries to a column in a new or existing table (see the sketch after this list). If you want to use an embedding model that is not available in HeatWave for running RAG, also add the vector embeddings of the input queries to a column of the input table.
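
The following is a minimal sketch of such an input table for batch queries; the database, table, and column names are illustrative and match the Running Batch Queries example later in this topic:

CREATE TABLE demo_db.input_table (id INT AUTO_INCREMENT, Input TEXT, primary key (id));
INSERT INTO demo_db.input_table (Input) VALUES('What is AutoML?');
INSERT INTO demo_db.input_table (Input) VALUES('What is HeatWave GenAI?');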

Using Embeddings From an Embedding Model Available in HeatWave

To use an embedding table containing vector embeddings from an embedding model that is available in HeatWave, set the vector_store_columns parameter to specify the column names that the ML_RAG routine uses to filter tables for context retrieval. However, because the inbuilt vector store tables use only the predefined column names, if you change a column name used for filtering, the inbuilt vector store tables are filtered out and are not used for context retrieval.

The example in this section uses the following table:

CREATE TABLE demo_minilm_table (id INT AUTO_INCREMENT, demo_text_column TEXT, primary key (id));
INSERT INTO demo_minilm_table (demo_text_column) VALUES('What is MySQL?');
INSERT INTO demo_minilm_table (demo_text_column) VALUES('What is HeatWave?');
INSERT INTO demo_minilm_table (demo_text_column) VALUES('What is HeatWave GenAI?');
call sys.ML_EMBED_TABLE('demo_db.demo_minilm_table.demo_text_column', 'demo_db.demo_minilm_table.demo_embedding_column', JSON_OBJECT('model_id', 'all_minilm_l12_v2'));
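
Optionally, you can confirm that ML_EMBED_TABLE added the vector column and recorded the embedding model in its column comment, which is what the ML_RAG routine matches when it filters tables. This quick check is not a required step:

-- Optional check: the generated demo_embedding_column is expected to carry a comment
-- recording the embedding model (all_minilm_l12_v2 in this example).
SHOW CREATE TABLE demo_db.demo_minilm_table;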

To run RAG, perform the following steps:

  1. To change the column names used to filter tables for context retrieval, set the routine options as shown below:

    set @options = JSON_OBJECT("vector_store_columns", JSON_OBJECT("segment", "TextSegmentColumnName", "segment_embedding", "VectorEmbeddingColumnName"), "embed_model_id", "EmbeddingModelName", "model_options", JSON_OBJECT("language", "Language"));

    Replace the following:

    • TextSegmentColumnName: the name of the embedding table column that contains the text segments in natural language. Default value is segment.

    • VectorEmbeddingColumnName: the name of the embedding table column that contains vector embeddings of the natural-language text segments. Default value is segment_embedding.

    • EmbeddingModelName: the name of the embedding model to use to generate the vector embeddings for the input query. The routine uses this embedding model name to find relevant tables for context retrieval. Default value is minilm if the output language is set to English and multilingual-e5-small if the output language is set to a language other than English.

      For the list of available embedding models, see HeatWave In-Database Embedding Models and OCI Generative AI Service Embedding Models.

    • Language: the two-letter ISO 639-1 code for the language you want to use for generating the output. The language parameter in model_options is required only if you want to use a language other than English. Default language is en, which is English. To view the list of supported languages, see Languages.

    For example:

    set @options = JSON_OBJECT("vector_store_columns", JSON_OBJECT("segment", "demo_text_column", "segment_embedding", "demo_embedding_column"), "embed_model_id", "all_minilm_l12_v2", "model_options", JSON_OBJECT("language", "en"));

    In this example, all embedding tables containing a string column demo_text_column and a vector column demo_embedding_column, which contains vector embeddings from all_minilm_l12_v2, are used for context retrieval.

    Similarly, you can use the vector_store_columns parameter to specify the following column names for the routine to filter relevant tables for context retrieval:

    • document_name: name of a column containing the document names. This column can be of any data type supported by HeatWave. Default value is document_name.

    • document_id: name of an integer column containing the document IDs. Default value is document_id.

    • metadata: name of a JSON column containing additional table metadata. Default value is metadata.

    • segment_number: name of an integer column containing the segment numbers. Default value is segment_number.

    These columns are optional; if you do not set them, the routine does not use them to filter tables.

  2. To define your natural-language query, set the @query variable:

    set @query="AddYourQuery";

    Replace AddYourQuery with your natural-language query.

    For example:

    set @query="What is AutoML?";
  3. To retrieve the augmented prompt and generate the output, use the ML_RAG routine:

    call sys.ML_RAG(@query,@output,@options);
  4. Print the output:

    select JSON_PRETTY(@output);

    The output is similar to the following:

    {
      "text": " AutoML is a machine learning technique that uses algorithms to automatically generate and optimize models for specific tasks, without the need for manual intervention. It combines the power of machine learning with the ease of use of traditional programming tools, allowing users to quickly and easily create accurate models for their data.",
      "citations": [
        {
          "segment": "What is MySQL?",
          "distance": 0.7121,
          "document_name": ""
        },
        {
          "segment": "What is HeatWave?",
          "distance": 0.7192,
          "document_name": ""
        },
        {
          "segment": "What is HeatWave GenAI?",
          "distance": 0.7905,
          "document_name": ""
        }
      ],
      "vector_store": [
        "`demo_db`.`demo_minilm_table`"
      ],
      "retrieval_info": {
        "method": "n_citations",
        "threshold": 0.7905
      }
    }

    The vector_store section lists the name of the embedding table that is used to retrieve context for generating the output.
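
If you want to work with individual fields of the result rather than the full JSON document, the following is a minimal sketch that uses standard MySQL JSON functions and assumes the @output structure shown above:

-- A sketch: extract the generated answer and the citation details from @output.
SELECT JSON_UNQUOTE(JSON_EXTRACT(@output, '$.text')) AS answer,
       JSON_EXTRACT(@output, '$.citations[*].segment') AS cited_segments,
       JSON_EXTRACT(@output, '$.citations[*].distance') AS distances;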

Using Embeddings From an Embedding Model Not Available in HeatWave

To use a table containing vector embeddings from an embedding model that is not available in HeatWave, the ML_RAG routine lets you provide the vector embedding of the input query along with the name of the embedding model that you used to embed both the input query and the text segments stored in your embedding table. When you provide the vector embedding of the input query, the routine skips embedding the query and proceeds directly with the similarity search, context retrieval, and RAG. However, in this case, you cannot use the inbuilt vector store tables for context retrieval.

The example in this section uses the following table:

CREATE TABLE demo_table (id INT AUTO_INCREMENT, demo_text TEXT, string_embedding TEXT, demo_embedding VECTOR (3) COMMENT 'GENAI_OPTIONS=EMBED_MODEL_ID=demo_embedding_model', primary key (id));
INSERT INTO demo_table (demo_text, string_embedding) VALUES('What is MySQL??', '[0,1,0]');
INSERT INTO demo_table (demo_text, string_embedding) VALUES('What is HeatWave?', '[0,0,1]');
INSERT INTO demo_table (demo_text, string_embedding) VALUES('What is HeatWave GenAI?', '[0,1,1]');
UPDATE demo_table SET demo_embedding=STRING_TO_VECTOR(string_embedding);
ALTER TABLE demo_table DROP COLUMN string_embedding;
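
Because these vector embeddings were generated outside HeatWave, you may want to verify what was stored before running RAG. The following is an optional check that uses the standard MySQL vector functions:

-- Optional check: view the stored embeddings and confirm that their dimension
-- matches the embedding model you used outside HeatWave (3 in this toy example).
SELECT id, demo_text, VECTOR_TO_STRING(demo_embedding) AS embedding,
       VECTOR_DIM(demo_embedding) AS dimension
FROM demo_table;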

To run RAG using a table that contains vector embeddings from an embedding model that is not available in HeatWave, perform the following steps:

  1. Provide the vector embedding of the input query:

    set @query_embedding = to_base64(string_to_vector('VectorEmbeddingOfTheQuery'));

    Replace VectorEmbeddingOfTheQuery with the vector embedding of your input query.

    For example:

    set @query_embedding = to_base64(string_to_vector('[0,1,0]'));
  2. To specify column names for the ML_RAG routine to find relevant tables for context retrieval, set the routine options:

    set @options = JSON_OBJECT("vector_store_columns", JSON_OBJECT("segment", "TextSegmentColumnName", "segment_embedding", "VectorEmbeddingColumnName"), "embed_model_id", "EmbeddingModelName", "query_embedding", @query_embedding, "model_options", JSON_OBJECT("language", "Language"));

    Replace the following:

    • TextSegmentColumnName: the name of the embedding table column that contains the text segments in natural language.

    • VectorEmbeddingColumnName: the name of the embedding table column that contains vector embeddings of the natural-language text segments.

    • EmbeddingModelName: the name of the embedding model that you used to generate the vector embeddings for the input query and embedding tables.

    • Language: the two-letter ISO 639-1 code for the language you want to use for generating the output. The language parameter in model_options is required only if you want to use a language other than English. Default language is en, which is English. To view the list of supported languages, see Languages.

    For example:

    set @options = JSON_OBJECT("vector_store_columns", JSON_OBJECT("segment", "demo_text", "segment_embedding", "demo_embedding"), "embed_model_id", "demo_embedding_model", "query_embedding", @query_embedding, "model_options", JSON_OBJECT("language", "en"));

    In this example, embedding tables containing a string column demo_text and a vector column demo_embedding, which contains vector embeddings from demo_embedding_model, are used for context retrieval.

    Similarly, you can use the vector_store_columns parameter to specify the following column names for the routine to filter relevant tables for context retrieval:

    • document_name: name of a column containing the document names. This column can be of any data type supported by HeatWave.

    • document_id: name of an integer column containing the document IDs.

    • metadata: name of a JSON column containing additional table metadata.

    • segment_number: name of an integer column containing the segment numbers.

    These columns are optional; if you do not set them, the routine does not use them to filter tables.

  3. To define your natural-language query, set the @query variable:

    set @query="AddYourQuery";

    Replace AddYourQuery with your natural-language query.

    For example:

    set @query="What is AutoML?";
  4. To retrieve the augmented prompt, use the ML_RAG routine:

    call sys.ML_RAG(@query,@output,@options);
  5. Print the output:

    select JSON_PRETTY(@output);

    The output is similar to the following:

    {
      "text": " AutoML is a subfield of machine learning that focuses on automating the process of building and training machine learning models. It involves using algorithms and techniques to automatically select features, tune hyperparameters, and evaluate model performance, without requiring human intervention. AutoML can be used for a variety of tasks, including classification, regression, clustering, and anomaly detection.",
      "citations": [
        {
          "segment": "What is MySQL??",
          "distance": 0.0,
          "document_name": ""
        },
        {
          "segment": "What is HeatWave GenAI?",
          "distance": 0.2929,
          "document_name": ""
        },
        {
          "segment": "What is HeatWave?",
          "distance": 1.0,
          "document_name": ""
        }
      ],
      "vector_store": [
        "`demo_db`.`demo_table`"
      ],
      "retrieval_info": {
        "method": "n_citations",
        "threshold": 1.0
      }
    }

    The vector_store section lists the name of the embedding table that is used to retrieve context for generating the output.

Using Your Embedding Table With a Vector Store Table

By default, the ML_RAG routine uses all predefined columns and column names available in the inbuilt vector store table to filter tables for context retrieval. This means that if your embedding table does not contain all columns that are available in an inbuilt vector store table, then your embedding table is filtered out and is not used for context retrieval by the routine.

Therefore, if you want to use an inbuilt vector store table along with your own embedding table for context retrieval, your embedding table must satisfy the following additional requirements:

  • Since the inbuilt vector store tables, which are set up using Asynchronous Load or Auto Parallel Load, use predefined column names, the column names in your embedding table must match the predefined vector store column names given below:

    • segment: name of the mandatory string column containing the text segments.

    • segment_embedding: name of the mandatory vector column containing the vector embeddings of the text segments.

    • document_name: name of the optional column containing the document names. This column can be of any data type supported by HeatWave.

    • document_id: name of the optional integer column containing the document IDs.

    • metadata: name of the optional JSON column containing metadata for the table.

    • segment_number: name of the optional integer column containing the segment numbers.

  • The vector embeddings in your embedding table must be from the same embedding model as the vector store table.
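
If an existing embedding table uses different column names, you can rename its columns so that they match the predefined names. The following is a sketch that reuses the demo_table columns from the earlier example; adjust the names to your own table:

-- A sketch: rename the columns of an existing embedding table to the predefined
-- vector store column names so that it is not filtered out when used alongside
-- the inbuilt vector store tables.
ALTER TABLE demo_db.demo_table
  RENAME COLUMN demo_text TO segment,
  RENAME COLUMN demo_embedding TO segment_embedding;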

The example in this section uses the following table:

CREATE TABLE demo_minilm_table (id INT AUTO_INCREMENT, segment TEXT, primary key (id));
INSERT INTO demo_minilm_table (segment) VALUES('What is MySQL?');
INSERT INTO demo_minilm_table (segment) VALUES('What is HeatWave?');
INSERT INTO demo_minilm_table (segment) VALUES('What is HeatWave GenAI?');
call sys.ML_EMBED_TABLE('demo_db.demo_minilm_table.segment', 'demo_db.demo_minilm_table.segment_embedding', JSON_OBJECT('model_id', 'all_minilm_l12_v2'));

To run RAG using an inbuilt vector store table and your embedding table, perform the following steps:

  1. Set the routine options:

    • If your embedding table contains all the same mandatory and optional columns as the inbuilt vector store table, set the routine options as shown below:

      set @options = JSON_OBJECT("embed_model_id", "EmbeddingModelName", "model_options", JSON_OBJECT("language", "Language"));

      Replace the following:

      • EmbeddingModelName: the name of the embedding model to use to generate the vector embeddings for the input query. The routine uses this embedding model name to find relevant tables for context retrieval. Default value is minilm if the output language is set to English and multilingual-e5-small if the output language is set to a language other than English.

        For the list of available embedding models, see HeatWave In-Database Embedding Models and OCI Generative AI Service Embedding Models.

      • Language: the two-letter ISO 639-1 code for the language you want to use for generating the output. The language parameter in model_options is required only if you want to use a language other than English. Default language is en, which is English. To view the list of supported languages, see Languages.

      For example:

      set @options = JSON_OBJECT("embed_model_id", "all_minilm_l12_v2", "model_options", JSON_OBJECT("language", "en"));
    • If your embedding table contains the same mandatory columns as an inbuilt vector store table, which are:

      • A text column with the name segment.

      • A vector column segment_embedding.

      Then, set the routine options as shown below:

      set @options = JSON_OBJECT("vector_store_columns", JSON_OBJECT("segment", "segment", "segment_embedding", "segment_embedding"), "embed_model_id", "all_minilm_l12_v2", "model_options", JSON_OBJECT("language", "en"));

      In this example, both embedding tables and vector store tables that contain a string column segment and a vector column segment_embedding, which contains vector embeddings from all_minilm_l12_v2, are used for context retrieval.

  2. To define your natural-language query, set the @query variable:

    set @query="AddYourQuery";

    Replace AddYourQuery with your natural-language query.

    For example:

    set @query="What is AutoML?";
  3. To retrieve the augmented prompt and generate the output, use the ML_RAG routine:

    call sys.ML_RAG(@query,@output,@options);
  4. Print the output:

    select JSON_PRETTY(@output);

    The output is similar to the following:

    {
      "text": " AutoML is a technology that automates the process of training a machine learning model. It replaces the laborious and time-consuming tasks of data analysts, such as selecting a model from a large number of viable candidate models, tuning hyperparameters, and selecting only predictive features to speed up the pipeline and reduce over-fitting.",
      "citations": [
        {
          "segment": "3.1.2 HeatWave AutoML Ease of Use ......................................................................... 98\n3.1.3 HeatWave AutoML Workflow .............................................................................. 99\n3.1.4 Oracle AutoML ................................................................................................... 99\n3.2 Before You Begin ........................................................................................................ 100",
          "distance": 0.4287,
          "document_name": ""
        },
        {
          "segment": "The HeatWave AutoML ML_TRAIN routine leverages Oracle AutoML technology to automate the process of training a machine learning model. Oracle AutoML replaces the laborious and time consuming tasks of the data analyst whose workflow is as follows:\n1. Selecting a model from a large number of viable candidate models.\n2. For each model, tuning hyperparameters.\n3. Selecting only predictive features to speed up the pipeline and reduce over-fitting.\n99",
          "distance": 0.4312,
          "document_name": ""
        },
        {
          "segment": "3.1.3 HeatWave AutoML Workflow ...................................................................................... 99\n3.1.4 Oracle AutoML ........................................................................................................... 99\n3.2 Before You Begin ................................................................................................................ 100",
          "distance": 0.4355,
          "document_name": ""
        }
      ],
      "vector_store": [
        "`demo_db`.`demo_embeddings`",
        "`demo_db`.`demo_minilm_table`"
      ],
      "retrieval_info": {
        "method": "n_citations",
        "threshold": 0.4355
      }
    }

    The vector_store section lists the names of the vector store table, demo_embeddings, and the embedding table, demo_minilm_table, that are used to retrieve context for generating the output.
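
To confirm programmatically which tables contributed context, you can read the vector_store field from the result; for example:

-- A sketch: list the vector store and embedding tables used for this answer.
SELECT JSON_EXTRACT(@output, '$.vector_store') AS tables_used;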

Running Batch Queries

To run multiple RAG queries in parallel, use the ML_RAG_TABLE routine. This method is faster than running the ML_RAG routine multiple times.

To run batch queries using ML_RAG_TABLE, perform the following steps:

  1. To specify column names for the ML_RAG_TABLE routine to find relevant tables for context retrieval, set the routine options:

    set @options = JSON_OBJECT("vector_store_columns", JSON_OBJECT("segment", "TextSegmentColumnName", "segment_embedding", "VectorEmbeddingColumnName"), "embed_model_id", "EmbeddingModelName", "model_options", JSON_OBJECT("language", "Language"));

    Replace the following:

    • TextSegmentColumnName: the name of the embedding table column that contains the text segments in natural language. If multiple tables contain a string column with the same name, they are all used for context retrieval. Default value is segment.

    • VectorEmbeddingColumnName: the name of the embedding table column that contains vector embeddings of the natural-language text segments. If multiple tables contain a vector column with the same name which contain embeddings from the specified embedding model, they are all used for context retrieval. Default value is segment_embedding.

    • EmbeddingModelName: the name of the embedding model to use to generate the vector embeddings for the input query. The routine uses this embedding model name to find tables generated using the same model for context retrieval. Default value is minilm if the output language is set to English and multilingual-e5-small if the output language is set to a language other than English.

    • Language: the two-letter ISO 639-1 code for the language you want to use for generating the output. The language parameter in model_options is required only if you want to use a language other than English. Default language is en, which is English. To view the list of supported languages, see Languages.

    For example:

    set @options = JSON_OBJECT("vector_store_columns", JSON_OBJECT("segment", "demo_text", "segment_embedding", "demo_embedding"), "embed_model_id", "all_minilm_l12_v2", "model_options", JSON_OBJECT("language", "en"));

    In this example, only embedding tables containing a string column demo_text and a vector column demo_embedding, which contains vector embeddings from all_minilm_l12_v2, are used for context retrieval. Since the inbuilt vector store tables use predefined column names, if you change the column names to any value other than the defaults, then the vector store tables are filtered out and are not used for context retrieval.

    To learn more about the available routine options, see ML_RAG_TABLE Syntax.

    Similarly, you can use the vector_store_columns parameter to specify the following column names for the routine to filter relevant tables for context retrieval:

    • document_name: name of a column containing the document names. This column can be of any data type supported by HeatWave. Default value is document_name.

    • document_id: name of an integer column containing the document IDs. Default value is document_id.

    • metadata: name of a JSON column containing additional table metadata. Default value is metadata.

    • segment_number: name of an integer column containing the segment numbers. Default value is segment_number.

    These columns are optional; if you do not set them, the routine does not use them to filter tables.

    If you are using an embedding model that is not available in HeatWave, then you must also provide the vector embeddings of the input queries. You can use the embed_column parameter to specify the name of the input table column that contains the vector embeddings of the input queries. However, in this case, you cannot use the inbuilt vector store tables for context retrieval.

  2. In the ML_RAG_TABLE routine, specify the table columns containing the input queries and for storing the generated outputs:

    call sys.ML_RAG_TABLE("InputDBName.InputTableName.InputColumn", "OutputDBName.OutputTableName.OutputColumn", @options);

    Replace the following:

    • InputDBName: the name of the database that contains the table column where your input queries are stored.

    • InputTableName: the name of the table that contains the column where your input queries are stored.

    • InputColumn: the name of the column that contains input queries.

    • OutputDBName: the name of the database that contains the table where you want to store the generated outputs. This can be the same as the input database.

    • OutputTableName: the name of the table where you want to create a new column to store the generated outputs. This can be the same as the input table. If the specified table doesn't exist, a new table is created.

    • OutputColumn: the name for the new column where you want to store the output generated for the input queries.

    For example:

    call sys.ML_RAG_TABLE("demo_db.input_table.Input", "demo_db.output_table.Output", @options);
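
    After the routine completes, you can read the generated answers from the output table. The following is a sketch that assumes each row of the Output column holds the same JSON structure as the ML_RAG output shown earlier in this topic:

    -- A sketch: read the generated answer text for each batch query.
    SELECT JSON_UNQUOTE(JSON_EXTRACT(Output, '$.text')) AS answer
    FROM demo_db.output_table;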