Rethinking Result Sets in Connector/Node.js

It used to be the case that, in order to actually process data retrieved from the database using Connector/Node.js, you would have to resort to an API that required the use of both JavaScript callbacks and Promises. This was meant to give the application developer more flexibility and control while decreasing the chance of buffering data unnecessarily. However, this wasn’t useful for the vast majority of use cases and made simple tasks a little bit cumbersome. Additionally, the fact that it required two different asynchronous constructs made it harder to grasp.

To make matters worse, in order to consume operational metadata about the columns in the result set, you would have to provide an additional callback, making the whole thing spiral a bit out of control, particularly when there were multiple result sets involved. In that case, you needed to create a shared context between the two functions in order to map data and metadata for each column in each result set.
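The shape of that older flow can be illustrated with a plain-JavaScript stand-in (this mock is not the real connector; it only mimics the two-callback `execute()` signature to show why shared state between the callbacks was needed):

```javascript
// Mock of the pre-8.0.18 callback-based execute(): data rows and column
// metadata arrive through two separate callbacks, so correlating them
// requires shared state in the application code.
function execute(rowCallback, metadataCallback) {
  metadataCallback([{ name: '_id' }, { name: 'name' }]);
  rowCallback(['1', 'foo']);
  rowCallback(['2', 'bar']);
  return Promise.resolve();
}

// Shared context that both callbacks must close over.
let columns = [];
const docs = [];

execute(
  row => docs.push(Object.fromEntries(row.map((value, i) => [columns[i].name, value]))),
  meta => { columns = meta; }
).then(() => console.log(docs)); // [{ _id: '1', name: 'foo' }, { _id: '2', name: 'bar' }]
```

With multiple result sets, this shared context had to be reset and tracked per result set, which is where the approach spiraled out of control.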

“Keep in mind that .execute() doesn’t return a promise, but rather receives a callback function to do your data processing of each individual row. This could be a bit annoying for you.”

— Gabriela Ferrara

Additionally, given the increasing pervasiveness of the MySQL Shell and the sheer number of examples and tutorials in the wild, some users found themselves shoehorning its synchronous JavaScript API into Node.js code, which didn’t really work due to the asynchronous nature of the latter, leading to some confusion and a lot of annoyance.

This has changed with the release of Connector/Node.js 8.0.18 (in September 2019). The Result instance resolved by the Promise returned by the execute() method now includes a whole new set of utilities that allow you to consume the result set data through a pull-based cursor. That cursor is implemented by the fetchOne() and fetchAll() methods, which are pretty much self-explanatory: they allow you to consume, from memory, either a single item or all the remaining items in the result set. Calling these methods also frees the corresponding memory allocated by the client as the data is consumed. This constitutes a good middle ground between a non-buffered approach like the one based on callbacks and a fully-buffered approach where the result set is kept in memory unless the application explicitly clears it via an additional API.
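The pull-cursor contract can be sketched with a minimal in-memory stand-in (a hypothetical `makeCursor` helper, not the connector’s implementation): each fetchOne() hands back one buffered item and releases it, fetchAll() drains whatever remains, and an exhausted cursor yields undefined.

```javascript
// Minimal in-memory sketch of a pull-based cursor: items are handed to the
// application one at a time (fetchOne) or all at once (fetchAll), and the
// buffered entries are released as soon as they are consumed.
function makeCursor(items) {
  const buffer = [...items];
  return {
    fetchOne: () => buffer.shift(),   // undefined once the buffer is exhausted
    fetchAll: () => buffer.splice(0)  // drains and returns the remaining items
  };
}

const cursor = makeCursor([{ _id: '1' }, { _id: '2' }, { _id: '3' }]);
console.log(cursor.fetchOne()); // { _id: '1' }
console.log(cursor.fetchAll()); // [{ _id: '2' }, { _id: '3' }]
console.log(cursor.fetchOne()); // undefined
```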

There are now three different kinds of Results with a contextual interface that varies depending on whether you are fetching documents from a collection (DocResult), rows from a table (RowResult) or raw data via SQL (SqlResult). This is exactly how it is specified by the X DevAPI Result Set standard. The only difference is that, in Connector/Node.js, the execute() method is asynchronous, so you access the Result instance by handling the resulting Promise rather than having it returned directly by the method call (as in other existing implementations).

In the same way, as described by the standard, for the table and raw SQL APIs, besides the fetchOne() and fetchAll() methods, these interfaces now provide additional methods such as getColumns() to process column metadata and nextResult() to iterate over multiple result sets.

So, looking back at the use cases analyzed before, this is how you can now work even more effectively with result sets using Connector/Node.js.

Document Store

With a myCollection collection under the mySchema schema, containing the following documents:

  "_id": "1",
  "name": "foo"
}, {
  "_id": "2",
  "name": "bar"

You can now retrieve all documents in the collection with:

const collection = session.getSchema('mySchema').getCollection('myCollection');

const result = await collection.find().execute();
console.log(result.fetchAll()); // [{ _id: '1', name: 'foo' }, { _id: '2', name: 'bar' }]

// alternatively, fetching one document at a time
const result = await collection.find().execute();
const docs = [];
let doc;

while ((doc = result.fetchOne())) {
  docs.push(doc);
}

console.log(docs); // [{ _id: '1', name: 'foo' }, { _id: '2', name: 'bar' }]

Regular Tables

Working with tables means that, besides column values, you can also access specific details about each column, such as its name, type, size, or encoding. Processing column metadata becomes a lot less confusing using the getColumns() method. So, with a myTable table under the same schema, which contains the following:

| _id | name  |
|-----|-------|
| "1" | "foo" |
| "2" | "bar" |

You can, similarly to the document mode counterpart, retrieve all the rows from the table with:

const table = session.getSchema('mySchema').getTable('myTable');

const result = await table.select().execute();
console.log(result.fetchAll()); // [['1', 'foo'], ['2', 'bar']]

// alternatively, fetching one row at a time
const result = await table.select().execute();
const rows = [];
let row;

while ((row = result.fetchOne())) {
  rows.push(row);
}

console.log(rows); // [['1', 'foo'], ['2', 'bar']]

And you can retrieve details about each column in the table with:

const columns = result.getColumns();

const names = columns.map(c => c.getColumnName());
console.log(names); // ['_id', 'name']

const charsets = columns.map(c => c.getCharacterSetName());
console.log(charsets); // ['utf8mb4', 'utf8mb4']

const collations = columns.map(c => c.getCollationName());
console.log(collations); // ['utf8mb4_0900_ai_ci', 'utf8mb4_0900_ai_ci']

Creating an object mapping each column name to its value (similar to result set items in document mode) is now as easy as:

// the column "label" accounts for aliases
const mapping = result.fetchAll()
  .map(row => {
    return row.reduce((doc, value, i) => {
      return Object.assign({}, doc, { [columns[i].getColumnLabel()]: value });
    }, {});
  });

console.log(mapping); // [{ _id: '1', name: 'foo' }, { _id: '2', name: 'bar' }]


As already mentioned, one of the biggest advantages of this new API is that it also streamlines the process of working with multiple result sets. So, with a table like the one used before and a PROCEDURE such as:

CREATE PROCEDURE proc()
BEGIN
  SELECT _id AS s1_c1, name AS s1_c2 FROM myTable;
  SELECT '3' AS s2_c1, 'baz' AS s2_c2;
END

You can easily iterate over all the result sets, without keeping any kind of shared state, like the following:

const rows = [];
const columns = [];

const res = await session.sql('CALL proc').execute();

do {
  rows.push(res.fetchAll());
  columns.push(res.getColumns().map(c => c.getColumnLabel()));
} while (res.nextResult() && res.hasData());

console.log(rows); // [[['1', 'foo'], ['2', 'bar']], [['3', 'baz']]]
console.log(columns); // [['s1_c1', 's1_c2'], ['s2_c1', 's2_c2']]

In Retrospect

This new result set API closes a huge gap with regard to X DevAPI platform compatibility and brings all the implementations closer to becoming almost drop-in replacements for each other. In particular, it capitalizes on the success of the MySQL Shell by introducing syntax much closer to the one used by its JavaScript implementation, providing a better framework for developers switching between the two environments. We believe it also leads to more readable and maintainable code while making a good compromise in terms of resource requirements, particularly memory usage.

Make sure you give it a try and let us know what you think about it. Report any issues you have via our bug tracker using the Connector for Node.js category or go one step further and submit a pull request.

If you want to learn more about Connector/Node.js and the X DevAPI, please check the following:

Make sure you also join our community Slack and come hang around at the #connectors channel: