WL#4533: QFPD: Abstract Query tree (AQT)

Affects: Server-7.x   —   Status: Un-Assigned

One of the main building blocks of the query fragment pushdown
functionality is the ability of the query engine and the storage engines to
negotiate which operations in a query can be computed by the storage engine,
and which ones need to be computed by the MySQL server.

Since we agree that it is impossible to model all possible capabilities of each
storage engine, we base our approach on a request-reply paradigm. With this
approach the server sends to the storage engine a query fragment and asks the
storage engine to communicate back which operations it can/cannot execute.

This approach requires that each storage engine is capable of exploring a
query fragment. For a number of good reasons (omitted here), the architecture
board agreed that we should not let storage engine implementors directly
navigate our internal query representation. Instead we should provide an
abstract interface to our internal query representation, such that it is:
- simple to understand and explore,
- expressive so that in principle it can cover all of SQL and possibly
  other languages such as XQuery,
- stable with respect to changes in our internal representation,
- extensible so that it can cope with change.

The goal of this WL is to define such an interface.
================================================================================
HLS for WL#4533: Abstract Query Tree (AQT)
================================================================================

!!! THIS IS A WORKING DRAFT DOCUMENT !!!

--------------------------------------------------------------------------------
Table of Contents
--------------------------------------------------------------------------------

1. MySQL internal query representations at different stages of query processing
   1.1 AST internal query representation
   1.2 Single-table access query representation
   1.3 QEP internal query representation
2. Design ideas behind the AQT metamodel
   2.1 Types of objects in the AQT meta-model
   2.2 Organization of the meta-objects in a query structure
3. The AQT meta-model of SQL
   3.1 Base classes
   3.2 Expressions
   3.3 Queries
   3.4 Database objects
   3.5 Data types
   3.6 Meta types
4. Example storage engine descriptions
   4.1 The current NDB pushdown capabilities
   4.2 Future NDB pushdown capabilities (with join)
   4.3 Google engine
   4.4 Falcon
   4.5 Other?
5. Related work
6. Future work
7. High-level plan
8. Detailed plan

--------------------------------------------------------------------------------
1. MySQL internal query representations at different stages of query processing
--------------------------------------------------------------------------------
Depending on the query processing phase and the purpose of use, the MySQL
server employs two internal representations of SQL queries. Initially the
parse phase creates an Abstract Syntax Tree (AST). The subsequent processing
phases gradually transform the AST, and create and attach to it additional data
structures that provide more details about how to execute a query. Finally
the cost-based join optimization phase creates a complete Query Execution Plan
(QEP) structure that fully specifies the join ordering, access methods and
conditions to be executed with each join and access method.

Unlike other DBMSs, where the AST and the QEP are completely separate
(and different) data structures, in MySQL the AST and the QEP representations
share a lot of common data structures. For instance, a QEP doesn't have a
separate representation of conditions and value expressions - instead QEPs
reuse the conditions and value expressions of the AST representation (Items).

There are several important query processing phases where the server operates
on different internal query representations:
a) During any phase after parsing there is an AST internal representation.
   There are several points where the AST has relatively well defined meaning:
   - after the name resolution/access checking process, and
   - after the logical rewrite phase.
b) During the preparation phase for cost-based optimization (ref and range
   optimization), which takes into account only single-table access methods,
   each table access method analyzed by the optimizer can be viewed as a simple
   query fragment over one table.
   TODO: Clarify the internal representation for each different type of
         optimization.
c) During the cost-based join optimization:
   - Each partial QEP and each join subsequence with its pushdown conditions
     and variable bindings can be viewed as a query fragment. Notice that the
     current MySQL doesn't yet compute pushdown conditions for partial plans
     (see WL#4536).
   - Each complete QEP (including the final chosen one) generated by the join
     optimizer can be viewed as a different physical representation of the same
     logical query. Therefore if the user of an AQT needs to see a query
     fragment as a query plan, different complete QEPs represent different
     fragments, otherwise, all complete QEPs should represent the same query.
d) Finally the plan refinement phase refines the chosen optimal plan and
   creates an executable form of the QEP. This form differs slightly from the
   QEP representation used during join optimization.

Since the QEP form is targeted for execution, it provides a normalized, simpler,
and more regular form of the initial query, and thus should be easier to analyze.
On the other hand, the AST representation allows one to analyze a complete query
prior to the cost-based optimization phase.

The remainder of this section describes in more detail the four query
representations and the data structures they use.


1.1 AST internal query representation
-------------------------------------

* Main structures/classes:
  - Overall query/subquery structure: SELECT_LEX classes
  - Expressions, conditions: Item classes
  - Table references: the TABLE_LIST structure

TODO:
Cover in more detail:
  - Top-level SELECT/FROM/WHERE structure,
  - FROM clause nested structure (list of trees),
  - WHERE clause item tree structure,
  - Subquery dual hierarchy - there are two ways to follow the subquery
    hierarchy:
    - via the Item hierarchy
    - via the SELECT_LEX hierarchy
  - GROUP BY, HAVING, ORDER BY


1.2 Single-table access query representation
--------------------------------------------

The first phase of the cost-based query optimizer analyzes and pre-selects the
best access methods for each table separately from other tables. Later, during the
join optimization phase, the information from this phase is used during join
ordering. In order to utilize the capabilities of various storage engines,
this pre-join optimization phase must also perform pushdown analysis, so that
in the end it selects the best access method by comparing the ones provided
by MySQL with the ones that storage engines are capable of internally. One
example is the Falcon engine that could benefit from its efficient bitmap index
intersection.

Since the pre-join optimization phase considers only single-table access
methods, the query fragments that represent such accesses are simple
single-table queries of the form:

SELECT *
FROM cur_table
WHERE range_or_ref_cond(cur_table);

TODO: 
Consider in WL#4536 which of the following sub-phases would benefit from
pushdown:
* ref optimizer
* range optimizer
* constant table optimizer

1.3 QEP internal query representation
-------------------------------------

WL#2241 already contains a detailed description of the QEP structure. Here we
provide additional notes on how this structure can be interpreted as a query.

Currently the QEP query representation is a left-deep tree of join and data
access operations, called Plan OPerations (POPs). In the near future this
representation will be extended to support bushy plans. Each operation has
attached to it some filter conditions. The conditions are shared between the
QEP and the AST representation, however the representation of the operations is
specific only for the cost-based optimization phase.

A query (subquery) in an executable form is represented by class JOIN. However,
depending on the exact optimization phase, the current representation of QEPs
has two sub-forms:
a) During the process of join optimization a QEP is represented as an array of
   POSITION structures, where each structure contains information needed for
   optimization, and contains pointers to the actual plan operator (a JOIN_TAB
   instance).
b) The final QEP is represented as an array of JOIN_TAB structures, plus
   additional information about GROUP/ORDER BY operations.

Currently the pushdown conditions attached to each POP are known only after the
optimizer chooses the final QEP, however, as WL#4536 explains, this will be
modified so that it is possible to derive the pushdown conditions for any
partial QEP. Therefore here we will assume that this is the case.

Any partial or complete join plan with its pushdown conditions is executable,
therefore it is possible to represent a QEP as a join query without a WHERE
clause, where each pushdown condition is attached to its join via a
JOIN ... ON clause. From a declarative point of view many partial plans that
reference the same tables may be semantically equivalent, but may have
different execution cost (and different physical execution methods). This
JOIN plan representation allows both interpretations of a QEP.
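As an illustrative sketch of this interpretation (self-contained stand-in
types, not the server's actual JOIN_TAB/POSITION structures), a left-deep plan
can be printed as a join query where every pushdown condition becomes an ON
clause:

```cpp
#include <string>

// Hypothetical stand-in for a plan node: a base table has a name and no
// children; a join node has two children and the condition pushed to it.
struct Table_ref {
  std::string name;               // non-empty only for base tables
  const Table_ref *left, *right;  // join operands, null for base tables
  std::string join_condition;     // pushdown condition attached via ON
};

// Renders the plan as the WHERE-less join query described above.
std::string to_from_clause(const Table_ref *t)
{
  if (t->left == nullptr)
    return t->name;
  return "(" + to_from_clause(t->left) + " JOIN " + to_from_clause(t->right) +
         " ON " + t->join_condition + ")";
}
```

The resulting text can be read either declaratively (the result table) or as
an operator tree (the join order is the nesting order).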

1.3.1 Cost based join optimization (partial query plans)
--------------------------------------------------------
Since all conditions of the WHERE clause are pushed (and attached) to some plan
operator, and since the optimizer doesn't consider pushing down GROUP and ORDER
operations, QEPs during cost-based join optimization can be represented with
queries with only two clauses: SELECT, and FROM.

SELECT <select list>
FROM <join expression>;

TODO: Example.


1.3.2 Plan refinement phase (complete query plan)
-------------------------------------------------

The representation of this phase extends the previous one with additional
GROUP/ORDER BY and materialization operations, if these operations have not
been optimized away.

The overall form of the queries is the same as above, with additional
GROUP/ORDER BY clauses, and with the SELECT list that has been
previously optimized via simplification and constant substitution.

SELECT <select list>
FROM <join expression>
[GROUP BY <grouping column list> HAVING <having condition>]
[ORDER BY <order specification list>]

TODO: Example.


1.4 Existing methods for iteration over AST related structures
--------------------------------------------------------------

For instance for Item and st_select_lex_unit we have:
- Item::walk
- Item::traverse
- Item::compile
- Item::print, st_select_lex_unit::print
- st_select_lex_unit::[master_unit, next_unit]

TODO: others?

Knowing these methods will help us figure out how to iterate over the whole AST.

NOTE:
The ::print() methods recursively follow the complete query structure,
therefore they might be the best way to analyze the current internal
query representation.


--------------------------------------------------------------------------------
2. Design ideas behind the AQT metamodel
--------------------------------------------------------------------------------

In order for storage engines to be able to analyze SQL queries and query plans,
the server must expose some model for queries and query plans. Since SQL itself
provides a model for data, the model for the concepts of SQL is a meta-model.
Examples of such meta-models are the QGM model adopted in DB2 [1], and the SQL
Query model adopted by the OMG [2] and the Eclipse DTP project [3].

Of these three meta-models, the SQLQueryModel of the Eclipse DTP project [3] is
very close to MySQL's AST representation. However, the SQLQueryModel is designed
for parsing, and contains too many details and levels of indirection, so we
consider it too general and complex for our purpose. Therefore in this task we
will use the SQLQueryModel as an inspiration rather than as a possible
standard. After prototyping, we may in the future implement the SQLQueryModel
in full if needed.

The remainder of this section presents an outline of the meta-model that the
MySQL server will export to storage engines. Based on the tree-like structure
of MySQL's internal query representation, we will call our meta-model the
Abstract Query Tree (AQT) meta-model.

An Abstract Query Tree (AQT) is a view over some internal representation of SQL
queries (or query fragments) inside the MySQL server that exposes the structure
and properties of the corresponding query in a uniform way. The main purpose of
an AQT is to abstract the implementation details of each of the different
internal query representations, and to provide a uniform, simple, and
time/space-efficient way to analyze query fragments. For instance, the AQT
should hide details such as the existence of Item_in_optimizer wrappers for
Item_in_subselect objects that represent IN predicates. Similarly, the AQT
should omit the artificial fake_select_lex objects as separate nodes and
represent their information as part of the containing select unit.

2.1 Types of objects in the AQT meta-model
------------------------------------------

In general a query is an expression over a set of tables that produces a single
table. Queries consist of the following main kinds of objects:
* table references and tables
  We use the term "table" in its most general sense - as a "relation with
  duplicates" (or a "bag"), independent of how the extent of a table is
  computed. There are various kinds of tables:
  - base tables,
  - joins (inner, outer, natural, using, on),
  - derived tables (views, queries - tables derived by queries themselves),
  - table functions (tables derived by external procedural computations),
* column references and columns
* value expressions
* conditions
* databases (schemas)
* queries and subqueries

Instances of these objects are linked in various ways, and have various
properties described later in this section.

2.2 Organization of the meta-objects in a query structure
---------------------------------------------------------

2.2.1 Tree-like structure of AQTs

SQL queries in general may reference the same object (column, (sub)expression,
table) in multiple places. In principle this would allow one to treat queries
as Directed Acyclic Graphs (DAG). Currently MySQL does not discover common
subexpressions, which would be the main reason to look at queries as DAGs,
therefore we limit ourselves to a tree model. However the AQT interface must
provide the means for its user to be able to identify common elements.
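A minimal sketch of how an AQT consumer could identify common elements,
assuming only that every node carries the unique id from SQL_object (the
Aqt_node type and function names here are hypothetical stand-ins):

```cpp
#include <set>

// Hypothetical minimal stand-in for an AQT node: only the id matters here.
struct Aqt_node {
  int id;                 // unique id within its meta-type space
  const Aqt_node *left;   // children in the tree view (shared nodes repeat)
  const Aqt_node *right;
};

// Helper: walks the tree, remembering every id seen so far.
static bool has_shared_element(const Aqt_node *root, std::set<int> &seen)
{
  if (root == nullptr)
    return false;
  if (!seen.insert(root->id).second)
    return true;  // id already visited => this "tree" shares a sub-element
  return has_shared_element(root->left, seen) ||
         has_shared_element(root->right, seen);
}

// Returns true if the tree reached from 'root' visits some node id twice,
// i.e. the query structure is really a DAG with a common element.
bool has_shared_element(const Aqt_node *root)
{
  std::set<int> seen;
  return has_shared_element(root, seen);
}
```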

2.2.2 Queries as sets of inter-related trees

In principle it is possible to model a query as a single tree, however we
consider this to be unnatural because queries consist of two orthogonal kinds
of trees:
  * table trees, and
  * expression trees.
If queries are modeled by a single tree, no matter which one is the primary
tree (a table or an expression tree), at some point the child of a node of one
type of tree will end up being a node of the other type of tree. For instance,
the child of a table would be an expression, or the child of an expression
would be a column of a table. With such non-homogeneous trees it would hardly
be possible to build any reasonable procedure that processes all nodes in a
polymorphic manner (except a recursive 'print' procedure that would make sense
for arbitrary nodes).

Since queries are expressions over tables that produce derived tables, we model
queries as trees of tables, where each table node may reference its own
expression tree as a property of the node. Such table trees can be interpreted
in two different ways:
a) as a declarative specification of the final result table whose result is
   specified in terms of other tables and conditions on those tables, or
b) as an operator tree of (logical) table access methods and joins that fetch
   data that satisfies the conditions attached to each table node.
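A sketch of the two orthogonal tree kinds (hypothetical stand-in types): the
walker below stays homogeneous because it visits only table nodes, while each
node's expression tree is reached as a property rather than as a child:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for an expression tree hanging off a table node.
struct Expr_node {
  std::string text;  // e.g. "t1.a > 10"
};

// Hypothetical stand-in for a table-tree node: base table or join.
struct Table_node {
  std::string name;                // base table name or join label
  const Expr_node *condition;      // attached condition, may be null
  const Table_node *left, *right;  // join children, null for base tables
};

// Walks only table nodes; collects the attached condition texts in
// bottom-up join order, never mixing node types in the traversal.
void collect_conditions(const Table_node *t, std::vector<std::string> &out)
{
  if (t == nullptr)
    return;
  collect_conditions(t->left, out);
  collect_conditions(t->right, out);
  if (t->condition)
    out.push_back(t->condition->text);
}
```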


3. The AQT meta-model of SQL
----------------------------

Design approach:
* The goal of this design is to identify the main classes and their
  relationships, but *not* to provide detailed description of all possible
  members/methods.
* The meta-model is described via C++ syntax which is used only as a textual
  representation for OO design, and not as a suggestion how to implement the
  design. If needed, this model can be expressed via UML diagrams.
* Attributes vs. operations - only properties that require non-trivial
  computations are denoted as operations. All other class properties are
  denoted as class attributes, however this doesn't imply anything about
  the implementation. In fact, the implementation of this meta-model will most
  likely implement all properties via get/set methods.
* Unlike the SQL standard, we will not distinguish between type and
  domain, unless it becomes obvious that this is needed.
* Wherever it makes sense, we will use the class names from the SQL'2003
  standard and the Eclipse SQLModel and SQLQueryModel meta-models.
* Inverse relationships have the "inv_" prefix.

/*******************************************************************************
3.1  Base classes
*******************************************************************************/

/**
  The base class for all objects in this API.
*/

class SQL_object
{
  /* The type of the object itself, not to be confused with data types. */
  Type meta_type;

  /* Unique ID within some space, depending on the meta_type. */
  int id;

  /* The name of the object. (Almost?) any SQL object has a name. */
  String name;

  /*
    A generalization of the various ways to introduce an alternative name
    for an object in some scope (e.g. connection or query). For instance, for
    columns the ANSI SQL term is 'derived column', for table references the
    analogous term is 'correlation name'.
  */
  String alias;

  /*
    Any SQL object may have a comment attached to it. In particular comments
    may be useful to pass information directly to a storage engine, or to
    attach hints in some hint-language separate from SQL.
  */
  String comment;
};


/**
  Base class for all objects that have a textual representation.
  If an object is a database object, then its textual representation is its
  CREATE statement, if the object is a query, then this is the text of the
  query. If an object contributes to the textual representation of another
  object (e.g. a column is part of table), then get_sql() returns its
  corresponding textual representation.

  TODO: Doesn't it make sense to merge this class with SQL_object, and
  assume that for objects with no string representation get_sql() is empty
  (or NULL)?
*/

class SQL_query_object
{
  /* The SQL textual representation of an object. */
  String get_sql();
};


/*******************************************************************************
3.2  Expressions
*******************************************************************************/

/**
  Base class for all expressions that return either a scalar or a row value.
  This class also models "derived columns" via the inherited property
  SQL_object::alias.
*/

class Value_expression : SQL_query_object
{
  /* The SQL data type of the value. */
  Data_type data_type; 
};


/**
  Expressions whose value is directly retrieved from a table reference column.
  Corresponds to the <column reference> concept in the SQL standard.
*/

class Value_expression_column : Value_expression
{
  Column column;
};


/**
  Base class for all boolean-valued expressions - predicates, search conditions,
  and boolean value expressions. Unlike the standard, here we unite
  <predicate> and <search condition> into one class.
  TODO:
  - No useful properties so far?
  - Most likely some base methods will be overriden, so we still need it.
*/
class Value_expression_boolean : Value_expression
{
  ??? ???;
};


class Value_expression_boolean_combined : Value_expression_boolean
{
  Value_expression_boolean left_expression;
  Value_expression_boolean right_expression;
  Boolean_operator         boolean_operator;
};

enum Boolean_operator {OR, AND};
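For illustration, a combined boolean expression could produce its
get_sql()-style text as follows (self-contained stand-in types with
hypothetical names, not the classes above):

```cpp
#include <string>

// Stand-in operator enum; prefixed names avoid clashing with macros.
enum Bool_op { OP_OR, OP_AND };

// Hypothetical stand-in mirroring Value_expression_boolean_combined:
// a leaf predicate carries its text; an inner node combines two children.
struct Bool_expr {
  std::string leaf_text;          // non-empty only for leaves
  const Bool_expr *left, *right;  // set only for combined nodes
  Bool_op op;
};

// A get_sql()-style rendering that parenthesizes combined expressions,
// making the AND/OR nesting explicit in the output.
std::string render(const Bool_expr *e)
{
  if (e->left == nullptr)
    return e->leaf_text;
  return "(" + render(e->left) +
         (e->op == OP_AND ? " AND " : " OR ") +
         render(e->right) + ")";
}
```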


/*******************************************************************************
3.3 Queries
*******************************************************************************/

/**
  A reference to any table-like object in a query. A table reference is like
  a higher-order variable that ranges over the set of tables in a database.
  Unlike base tables, which exist permanently in a database, table references
  exist only within the scope of a query.

  Table references may have a <correlation name> defined with the
  [AS] <correlation name> clause. This concept is already modeled via the
  base class property SQL_object::alias.
*/

class Table_reference : SQL_query_object
{
  List<String> derived_column_list;
};


/**
  Table reference to a table defined in a database - base table or view.
  Notice:
  In the case of views, a Table_in_database may be transformed into a
  Derived_table.
*/

class Table_in_database : Table_reference
{
  Table database_table;
};


/**
  Table reference to the result of a JOIN. Unlike derived and stored
  tables, the result table of a JOIN cannot have a table correlation.
  Instead of creating one extra level of inheritance, this class must
  satisfy the invariant "alias == NULL".
*/

class Table_joined : Table_reference
{
  Table_reference          table_ref_left;
  Table_reference          table_ref_right;
  Value_expression_boolean join_condition;
  Join_operator            join_operator;
};

enum Join_operator {
  DEFAULT_INNER /* comma */, EXPLICIT_INNER, LEFT_OUTER, NATURAL, USING
};

/**
  A query expression specifies the derivation of a table from other tables.
  According to SQL'2003, Part 2, 4.14.2:
  "A derived table is a table derived directly or indirectly from one or more
  other tables by the evaluation of a <query expression> whose result has an
  element type that is a row type. The values of a derived table are
  derived from the values of the underlying tables when the
  <query expression> is evaluated."
*/

class Query_expression : Table_reference
{
  Into_specification into_clause; /* TODO: needs more work. */
  With_specification with_clause; /* TODO: needs more work. */
  /* TODO: there is definitely more */
};


/**
  Represents UNION, INTERSECT, EXCEPT queries.
  TODO: consider merging this class with its base class Query_expression.
*/

class Query_expression_combined : Query_expression
{
  Query_expression query_left;
  Query_expression query_right;
  Query_combined_operator combined_operator;
};

enum Query_combined_operator
  {UNION, UNION_ALL, INTERSECT, INTERSECT_ALL, EXCEPT, EXCEPT_ALL};

/**
  Represents a subselect (a single SELECT operation without nesting).
*/

class Query_select : Query_expression
{
  List<Value_expression>   select_clause;

  /* Notice: Table_references also represent lists via a special 'comma' operator. */
  Table_reference          from_clause;

  Value_expression_boolean where_clause;

  List<Value_expression>   group_by_clause;

  Value_expression_boolean having_clause;

  /* Value_expressions also contain order specification. */
  List<Value_expression>   order_by_clause;

  /* MySQL-specific clause with special structure. */
  Limit_specification      limit_clause;

  bool                     isDistinct;
};

class Limit_specification : SQL_query_object
{
  int lower_bound;
  int upper_bound;
};


/*******************************************************************************
3.4 Database objects
*******************************************************************************/

/**
  Base abstract class for all data elements with a type (columns, fields,
  attributes, and parameters).
*/

class Typed_element : SQL_object
{
  /*
    The ordinal position within the table/row type/structured type that contains
    this element.
  */
  int       position;

  /* The SQL data type of the field. */
  Data_type data_type;
};

class Column : Typed_element
{
  boolean    nullable;
  String     default_value;
  Table      table;
  Set<Index> indexes();
};

class Field : Typed_element
{
  /* TODO: not important now */
};

class Parameter : Typed_element
{
  /* TODO: not important now */
};

class Table : SQL_query_object
{
  List<Column> columns;
  Database     database; /* Should we support Schema and how? */
  Set<Index>   indexes;  /* TODO: do indexes make sense for derived tables? */
};

/**
  Unnamed derived tables.
*/

class Derived_table : Table
{
  Query_expression query;
};


class View : Derived_table
{
  Algorithm    algorithm;
  Check_option check_option;
  /* TODO: what else? */
};

class Base_table : Table
{
  /* TODO */
};

class Temporary_table : Base_table
{
  /* TODO */
};

class Persistent_table : Base_table
{
  /* TODO */
};

class Index : SQL_query_object
{
  List<Index_part> parts;
  boolean          clustered;
  boolean          unique;
  Table            inv_table; /* Inverse of Table::indexes */
  /* TODO: other useful properties here. */
};


class Index_part : SQL_query_object
{
  Column column;
  Index  inv_index; /* Inverse of Index::parts */
  /* TODO: other useful properties here. */
};
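The "inv_" relationships are easiest to keep consistent if both directions are
updated in one place. A sketch with stand-in types (the add_part helper is
hypothetical, not part of the meta-model above):

```cpp
#include <vector>

// Hypothetical stand-ins for Index and Index_part, showing how an "inv_"
// relationship can be kept in sync with its forward direction.
struct Index;

struct Index_part {
  Index *inv_index = nullptr;  // inverse of Index::parts
};

struct Index {
  std::vector<Index_part *> parts;

  // Adding a part sets the inverse pointer, so the two directions of the
  // relationship can never disagree.
  void add_part(Index_part *p)
  {
    p->inv_index = this;
    parts.push_back(p);
  }
};
```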


/*******************************************************************************
3.5 Data types
*******************************************************************************/

/*
  The types in this section model the type system of SQL.
*/

class Data_type : SQL_query_object
{
  enum_field_types mysql_field_type;
  /* TODO */
};

/* Data types defined by the SQL standard. */

class SQL_data_type : Data_type
{
  /* TODO */
};

class Predefined_data_type : SQL_data_type
{
  Primitive_data_type primitive_type;
};

enum Primitive_data_type {
 BIGINT, BINARY, BINARY_LARGE_OBJECT, BINARY_VARYING, BOOLEAN, CHARACTER,
 ..., DATE, FLOAT, INTEGER, ...
};

class Constructed_data_type : SQL_data_type
{
};

class Row_data_type : Constructed_data_type
{
  List<Field> fields;
};

class Collection_data_type : Constructed_data_type
{
  boolean is_ordered; /* If true - 'array', if false - 'multiset' (bag). */
  /* TODO: not needed yet */
};

class User_defined_type : Data_type
{
  /* TODO: not needed yet */
};



/*******************************************************************************
3.6 Meta types
*******************************************************************************/

/*
  The classes in this section represent a type system for the meta-classes in
  the AQT meta model itself. Since all instances of a meta-class have one and
  the same type at a time, all the meta type classes are singleton classes.

  The current nearest concept in MySQL are the constants in enum Item::Type,
  returned by the Item::type() method. Other class hierarchies also have
  similar type-descriptive enums.

  Here any meta-class has a corresponding type object, and all type objects
  have the same interface and representation.

  TODO: Decide what is the scope of these singleton classes - a server instance,
        a connection, a query? This decision may be resolved in the LLD.

  TODO: verify this:
  Typically there is one meta-type class for each class in the model, however
  in some cases the same AQT model class may have several corresponding types
  that further refine the classification of AQT class instances in some way.
*/

class Type : SQL_query_object
{
  /* Notice that the name and other important properties are inherited. */
  private SQL_object current_object;
  Multiset<SQL_object> extent(); /* All existing objects of this type. */
};

class Type_object : Type
{
};

class SQL_query_object_type : Type
{
};

class Value_expression_type : Type
{
};

class Value_expression_column_type : Value_expression_type
{
};

class Value_expression_boolean_type : Value_expression_type
{
};

.........


class Table_reference_type : Type
{
};
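One way to realize the singleton requirement, sketched with hypothetical
stand-ins and assuming server-process scope (the actual scope is an open TODO
above):

```cpp
#include <string>

// Hypothetical stand-in for the Type base class: only the name matters here.
class Type {
public:
  explicit Type(std::string name) : name_(std::move(name)) {}
  const std::string &name() const { return name_; }
private:
  std::string name_;
};

// Each meta-class exposes exactly one Type object; a function-local static
// gives one lazily constructed instance per process.
class Table_reference_type : public Type {
public:
  static Table_reference_type &instance()
  {
    static Table_reference_type the_instance;  // the single instance
    return the_instance;
  }
private:
  Table_reference_type() : Type("Table_reference") {}  // no public ctor
};
```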



--------------------------------------------------------------------------------
4. Example storage engine descriptions
--------------------------------------------------------------------------------

4.1 The current NDB pushdown capabilities

A limited amount of pushdown capability exists already. The NDB engine is
capable of handling pushdown conditions of a very simple nature, like the
query below, where both a and b are columns of the same NDB table t:

    select * from t where a > 2 and b > 3;

The condition may be a composite condition, however each component of the
condition must be fairly simple to be considered for pushdown.
Conditions like the following are not supported by NDB as of now:
    a > b    // involves more than one field
    mod(a,10) > 3
    a+1 > 10

The handler infrastructure for this pushdown is provided by the handler class
function COND *cond_push(const COND *cond), which is implemented by the NDB
engine. The COND type is the same as the Item type.
   
The handler function ha_ndbcluster::cond_push(COND *cond), which implements the
current pushdown method, is fairly complicated (850+ lines of code) even though
the problem specification may sound simple. Here is simplified *pseudo* code of
the current NDB implementation (omitting much of the lower-level detail):

   // Returns NULL on a successful push, else returns the rejected condition.
   COND *ha_ndbcluster::cond_push(COND *cond)
   {
      context = Initialize a dynamic context object which specifies
                things like
                (1) what Item types are expected, e.g. COND_ITEM, FUNC_ITEM
                (2) what result types are expected, e.g. INT_RESULT,
                    REAL_RESULT, etc.
      cond->traverse_cond(&ndb_serialize_cond,
                          (void *) &context, Item::PREFIX);

      if (context.supported)
        return NULL;
      else
        return cond;
   }

   The Item class provides prefix and postfix traversal functions. Each
Item can be of a different type, such as an "and"/"or" condition function, a
field item, etc. Hence the number of operands of an item depends on its type.
The dynamic polymorphism of the item tree is currently defined by a vast
number of Item subclasses, which are defined in multiple hierarchies.

  Example inheritance graph: Item_cond_or ==> Item_cond ==> Item

  Item_cond contains { List<Item> list; } to store its operands (mostly 2
operands for conditions like "and"/"or"/">"/"<").

The individual Item classes override functions like traverse_cond() and
print() to properly iterate over their operands.

Note: cond_push() can be called multiple times. Each successful push adds an
additional filter, combined with the previously pushed ones with "and" logic.
  
The current interface for pushdown exposes Item internals to the storage
engine. Some abstraction, supported by well-defined documentation, is needed
here. Also, the current pushdown support is limited to single-table filters
only, as it is based on the table handler interface.
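The accept/reject decision described in this section can be sketched in
self-contained form (stand-in node kinds rather than the real Item hierarchy;
the actual ndb_serialize_cond performs many more checks, e.g. counting the
fields involved in each comparison):

```cpp
#include <vector>

// Hypothetical stand-in node kinds for a condition tree.
enum Node_kind { COND_AND, FUNC_GT, FIELD, CONST_INT, FUNC_MOD };

struct Cond_node {
  Node_kind kind;
  std::vector<const Cond_node *> args;  // operand count depends on the kind
};

// Prefix traversal mirroring the pushdown decision: the whole condition is
// rejected as soon as any node of an unsupported kind is found.
bool is_pushable(const Cond_node *n)
{
  if (n->kind == FUNC_MOD)  // e.g. mod(a,10) > 3 is not pushable
    return false;
  for (const Cond_node *arg : n->args)
    if (!is_pushable(arg))
      return false;
  return true;
}
```

Note how one unsupported component makes the whole composite condition
non-pushable, matching the all-or-nothing reply of cond_push().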

4.2 Future NDB pushdown capabilities (with join)

4.3 Google engine

4.4 Falcon

4.5 Other?

--------------------------------------------------------------------------------
5. Related work
--------------------------------------------------------------------------------
[1] The Query Graph Model (QGM) developed by IBM in Starburst, now in DB2:
    "Extensible/rule based query rewrite optimization in Starburst"
    by Hamid Pirahesh, Joseph M. Hellerstein, Waqar Hasan, SIGMOD 1992
[2] OMG's Information Management Metamodel (IMM)
    - http://www.omgwiki.org/imm/doku.php
    - www.omg.org/docs/ptc/07-08-30.odt
[3] Eclipse SQLMetamodel:
    http://www.eclipse.org/datatools/project_modelbase/,
    http://wiki.eclipse.org/MDT-IMM-Proposal
    - SQLQueryModel:
      http://www.eclipse.org/datatools/project_modelbase/modelbase_doc/SQL_SQLQuery_ModelWebPub/root.html
    - SQL metamodel as UML diagram:
      http://www.eclipse.org/datatools/project_modelbase/modelbase_doc/SQLModelWebPub/root.html
 

[4] "System Prototype and Verification Using Metamodel-Based Transformations"
    http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2007/04&file=o4001.xml&xsl=article.xsl&;jsessionid=LkcJfl54fCp2NfbSZVZtZHLQFpKhGWXvTFjpJ0j4nTtGVYGhtTfz!1912462927

[5] SQL:2003 Part 2, Foundation,
    * Sect. 4.14.6 Operations involving tables

--------------------------------------------------------------------------------
6. Future work
--------------------------------------------------------------------------------

6.1 Cache the pushdownability of each operation per SE. One way to do this
    is to implement a lookup table with a suitable schema.

    Open question:
    - What is the representation of a SE?

--------------------------------------------------------------------------------
7. High-level Plan
--------------------------------------------------------------------------------

01. Initial design.

02. Implement an abstract AQT interface.
    * tree walkers / iterators for table references/expressions
    * table reference/expression adapter classes
    * meta-type system
    * query metamodel class

03. Implement the tree walker/iterator part of the AQT interface
    for optimized QEPs.
    * tree walkers / iterators for table references/expressions
    * simple printing of table ref/expression trees
    * meta-type system

04. Implement the adapter part of the AQT interface
    for optimized QEPs.
    * table reference adapter classes
    * expression adapter classes
    * object pool for adapter classes
    * simple printing in a tree form of all properties of trees of
      adapter objects
    NOTE: there is a chance this phase can be skipped for NDB, needs
          investigation

05. Implement a unit test for the AQT query metamodel and fix
    the problems it finds.
06. Implement a prototype that prints optimized QEPs as queries as a
    simple example of how to use the AQT API.

=> Ready to begin work on NDB join pushdown prototype.

07. Design update / re-design based on prototype and feedback.
08. Change the AQT implementation based on the re-design.

09. Implement AQT for partial QEP internal query representation.

=> Ready to begin work on query optimization/decomposition.

10. Implement AQT for single-table access method internal query representation.

=> Ready to do pushdown for Falcon-specific index access methods.

11. Implement AQT for AST internal query representation.

=> Ready to do pushdown for SEs that want to skip our optimizer.

12. Design update / re-design based on prototype and feedback.
13. Extend the unit tests.


=> Ready to use each of the AQT implementations in pushdown prototypes.

=> Ready to begin production-level design/implementation of AQT.

--------------------------------------------------------------------------------
8. Detailed Plan
--------------------------------------------------------------------------------

1. Initial design (7.1)
-----------------------

1.1 Find and study existing similar SQL meta-models and APIs
- OMG's Information Management Metamodel (IMM)
- Eclipse's SQLMetamodel
- IBM Query Graph Model (QGM)
- IBM Information Integrator Wrapper Development C++ API
Estimate: done (100%)

1.2 Design a SQL Query meta-model for MySQL (in UML)
- meta-model of expressions
- meta-model of table references
Estimate: 2009-03-22 (60%)
  - done with Expressions, more work on Table references
  - need more work to refine some issues

1.3 Write an initial version of HLS and LLD
Estimate: done (100%)

1.4 Split the task into two subtasks (involves some redesign/rewriting)
- Query meta-model
- Database meta-model (WL#4838 assigned to Thava)
Estimate: 2009-03-31 (10%)

Average progress: 67.5%


2. Prototyping - AQT for optimized query execution plans (7.02, 7.03)
---------------------------------------------------------------------

Implement base abstract AQT classes, and implement them for optimized QEPs
(JOINs and JOIN_TABs).

2.1 Base abstract classes
Estimate: done (100%)

2.2 Meta-type system
Estimate: 2009-04-05 (50%)
  - metatypes implemented, but
  - not yet used by AQT classes

2.3 Table reference tree cursors/iterators

2.3.1 Wrap left-deep tree representation
Estimate: done (100%)

2.3.2 Extend with bushy plans resulting from semi-join execution
Estimate: ??? (0%)
- this will most likely not be part of the NDB prototype

2.4 Expression tree cursors/iterators
Estimate: 2009-03-26 (80%)

2.5 Arrange code in physical modules
Arrange the code as specified by Mats in WL#4739: Physical Structure of Server.
Estimate: 2009-03-27 (11%)
  - Discussed with Mats, figured where to start

2.6 Main query class (factory for 2.1, and 2.2)
Estimate: 2009-04-05 (0%)

2.7 Prototype that prints a query via its AQT
Estimate: 2009-04-12 (20%)

2.8 Table reference proxy classes
Estimate: 2009-04-19 (0%)

2.9 Expression proxy classes
Estimate: 2009-04-19 (0%)

2.10 Analyze and add the most obviously needed properties/operations to
     Expression/Item tree iterators, and to their proxy classes.
Estimate: 2009-04-26 (0%)

2.11 Extend the prototype to print a query and information available via all
     new classes and properties
Estimate: 2009-04-26 (0%)

2.12 Update the WL to reflect the prototype.
Estimate: 2009-05-10 (0%)

Average progress: 28.6%

            
                                    
================================================================================
LLD for WL#4533: Abstract Query Tree (AQT)
================================================================================

!!! THIS IS A WORKING DRAFT DOCUMENT !!!

--------------------------------------------------------------------------------
Table of Contents
--------------------------------------------------------------------------------

1. Implementation alternatives for the AQT metamodel
   1.1 Implementation approaches - "shadow structure" vs "pure iterator"
   1.2 Implementation of the pure iterator approach
       1.2.1 Iterator with dynamic properties
       1.2.2 Multiple-inheritance with up/down-casting
2. Application of the approaches 1.2.[1,2] to the engines described in HLS.3.
3. handler.h changes overview

--------------------------------------------------------------------------------
1. Implementation alternatives for the AQT metamodel
--------------------------------------------------------------------------------

1.1 Implementation approaches - "shadow structure" vs "pure iterator"
---------------------------------------------------------------------

There are two general approaches to create an abstract model for the internal
representation.

The first approach, called "pure iterator", is to provide a public view onto
the internal AST through iterators. With this approach the public AQT model
reflects more or less exactly the AST structure. This mapping may possibly omit
various details - both in the structure of the AST and in the contents of each
AST node.  With this approach there is no explicit data structure that
represents the AQT.  A user of the AQT may iterate through all nodes of the AQT
and e.g. construct its own representation. If we make an analogy between an AST
and the data in some database, then with this approach an AQT is analogous to a
database view.

The second approach, called "shadow representation", is to create an explicit
query representation, such that there is one instance of it for each instance
of an internal query representation. This shadow representation may have the
most suitable abstract model that differs from the internal model. In the
analogy with databases, this approach is similar to creating a materialized
view or to create some new database derived from the data in another database,
and then wrap the access to the new database with views.

There may be other approaches, but based on all prior discussions we focus
only on these two. Let's analyze their pros and cons.

a) Pure iterator approach:
   + the AQT is always consistent with the internal representation, thus usable
     during all query processing phases,
   + possibly simpler to implement - no need to design a new representation for
     queries,
   + less memory overhead (only the iterator state),
   + possibly more efficient,
   - restricts the freedom to choose a better AQT model,
   - possibly less efficient to navigate than a well designed explicit AQT,
   - possibly complex implementation due to the complexity and heterogeneity of
     the internal representation.
b) Shadow representation exposed through an iterator interface:
   + full freedom in the definition of a new AQT model,
   + once instantiated, possibly more efficient to iterate,
   + allows at some point to transition from the current internal structure
     to the new one inside the server,
   + may provide a clean query representation to engines that lack one,
   - hard to keep consistent with the internal AST representation,
   - may take too long to design,
   - still needs to solve the same problem as the pure iterator approach -
     how to iterate over all nodes of the internal AST,
   - may be hard to map the internal AST to the AQT,
   - time overhead to create the AQT,
   - memory overhead to store the AQT.

Notice that here we assume that we will not need to share query representations
across clients. If this is the case, then no two threads will ever access the
same internal representation, and there will never be consistency problems due
to one thread modifying an internal query representation that is accessed by
another thread via its AQT.

This WL adopts the pure iterator approach because:
- the abstract model is clear - it is more or less a 1:1 mapping to the
  internal representation, thus it will be much faster to design, and
- there is no problem with keeping the AQT consistent with the internal
  representation at any point.
If we ever get to a point where we need to materialize an AQT, then the AQT
may be used to map the internal AST into some new explicit representation.
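The "view" character of the pure iterator approach can be illustrated with a
minimal sketch. All types and names below are illustrative, not the proposed
API: the cursor keeps only its own position as state, walks an existing tree
in place, and never copies or owns nodes.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative internal AST node, standing in for Item/TABLE_LIST etc.
struct Node {
  const char *name;
  std::vector<Node *> children;
};

/*
  Pure-iterator sketch: a read-only pre-order cursor over an existing tree.
  Its only state is the current node plus a stack of positions; the tree
  itself is never duplicated - the cursor is a "view" onto it.
*/
class Tree_cursor {
public:
  explicit Tree_cursor(Node *root) : cur(root) {}

  Node *current() const { return cur; }

  // Advance in pre-order; return false when the walk is exhausted.
  bool next() {
    if (!cur->children.empty()) {          // descend to first child
      stack.push_back({cur, 0});
      cur = cur->children[0];
      return true;
    }
    while (!stack.empty()) {               // backtrack to next sibling
      auto &top = stack.back();
      if (++top.second < top.first->children.size()) {
        cur = top.first->children[top.second];
        return true;
      }
      stack.pop_back();
    }
    return false;
  }

private:
  Node *cur;
  std::vector<std::pair<Node *, std::size_t>> stack;
};
```

The same tree can be visited by many cursors, and a cursor can be discarded at
any time without any clean-up of the underlying representation.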


1.2 Implementation of the pure iterator approach
------------------------------------------------

This section describes two alternative approaches to implement the "pure
iterator" approach.

1.2.1 Iterator with dynamic properties

One possible way to design the AQT interface based on the "pure iterator"
approach is to provide only one iterator class that internally walks over the
internal AST representation. The AQT iterator doesn't create any new objects,
it only returns references to existing ones. The class is generic and
extensible via:
* A separate type hierarchy for all nodes in the AST. Essentially this type
  hierarchy is a meta-model of SQL itself.
* A dynamic list of properties for each different type of node.

The meta-type hierarchy is stored separately via singleton objects of type
"Type" (the idea for this meta-type hierarchy is pretty old, and exists in a
number of systems). We describe the meta-type hierarchy in more detail after
the AQT description.

class AQTIterator {
  AQTIterator(query);

/* Navigation */
  /*
    Each of the following 5 methods changes the current node. We cannot
    get a node object per se, but we can inspect any node via the
    "Inspection" group of methods below.
  */
  next() -> {T, F}
  previous() -> {T, F}
  parent() -> {T, F}
  first_child() -> {T, F}
  child(i) -> {T, F}

/* Query Optimization */
  mark_as_non_pushdownable()
  substitute_node(constant)
  substitute_subtree(constant or query fragment)
  /* hey! the above 2 methods are the basis for a real query rewrite interface! */

/* Inspection */
  /* Stable, well-known properties common to all nodes. */
  name() -> String
  type() -> Type
  /* Dynamic properties */
  prop(String name) -> String value
  properties() -> collection of property names (of type String)

/* Evaluation */
  bind(data) -> {T, F} /* Success/failure. */
  val_int() -> int
  val_bool() -> bool
  val_string() -> String

/* Other */
  /* Print the SQL text representation of the current node. */
  print_sql() -> String
  /* Print the current node in Graphviz format to be visualized. */
  print_graph() -> String
}
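The prop()/properties() part of the interface above could be backed by a
simple name-to-value map per node. A minimal sketch, with illustrative names
only (the real implementation would likely compute properties lazily from the
underlying Item/JOIN structures):

```cpp
#include <map>
#include <string>
#include <vector>

/*
  Sketch of the dynamic-property idea: each node exposes an open-ended set
  of named string properties instead of a fixed C++ member per field, so
  new node kinds can add properties without changing the iterator class.
*/
class Property_bag {
public:
  void set(const std::string &name, const std::string &value) {
    props[name] = value;
  }

  // prop(name) -> value; empty string if the property is absent.
  std::string prop(const std::string &name) const {
    auto it = props.find(name);
    return it == props.end() ? std::string() : it->second;
  }

  // properties() -> collection of property names (of type String).
  std::vector<std::string> properties() const {
    std::vector<std::string> names;
    for (const auto &p : props)
      names.push_back(p.first);
    return names;
  }

private:
  std::map<std::string, std::string> props;
};
```

A storage engine can then discover what is known about a node at runtime via
properties() and fetch only the entries it understands via prop().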


NOTES:
* node types may change dynamically
* each node has different properties depending on the node type - how to
  engineer this:
  - do nothing and use the prop() and properties() methods,
  - for each node type have a list of supported properties,
  - construct explicit objects of the specific node type:
    - a new object every time, or
    - one singleton per object type, overwritten in place every time
      we need to inspect a node.

NOTES:
- Consider applying the visitor pattern: similar to BOOST, define "event
  points" during the processing of each node. Each event may call a specific
  "visitor" function. In this way we can implement pre-order, post-order and
  other traversal algorithms. See:
  http://www.boost.org/doc/libs/1_36_0/libs/graph/doc/breadth_first_search.html
  http://www.boost.org/doc/libs/1_36_0/libs/graph/doc/depth_first_search.html
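The event-point idea can be sketched as a depth-first walk that fires one
callback when a node is discovered (pre-order) and another when it is finished
(post-order), in the style of the BOOST visitors linked above. All names below
are illustrative:

```cpp
#include <functional>
#include <vector>

// Illustrative tree node.
struct Vnode {
  int id;
  std::vector<Vnode *> children;
};

/*
  A visitor is just a set of callbacks, one per "event point". Leaving a
  callback empty means "not interested in this event".
*/
struct Visitor {
  std::function<void(const Vnode &)> discover; // fired on first visit
  std::function<void(const Vnode &)> finish;   // fired after all children
};

// Depth-first traversal with the two event points wired in.
void depth_first(const Vnode &n, const Visitor &v) {
  if (v.discover)
    v.discover(n);                 // pre-order event
  for (const Vnode *c : n.children)
    depth_first(*c, v);
  if (v.finish)
    v.finish(n);                   // post-order event
}
```

With only the `discover` callback set this yields a plain pre-order walk; with
only `finish` it yields post-order, so one traversal routine covers both.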


1.2.1.1 Meta-type hierarchy for all meta-database objects.

In any case we need to present to the user a data model of our internal
representation. Such models are present in other DBMSes as well, and make it
much simpler to understand, navigate and inspect a query representation.

Below is an incomplete rough sketch of such a model. This needs to be
discussed. The meta-type model supports multiple inheritance.

Since this document is in text format, we use the following notation:
Type_name(property_1, ..., property_N) /* description */


Object(name, property_list)
  Type
    Number
      Integer
      Float
      ...
    String
    Date
    etc.
  Table
    StoredTable
      SystemTable---\
                      SystemTemporaryTable
      TemporaryTable-/
      RegularTable
    DerivedTable
      View
      Query
      Join
      TableFunction /* (it is also a Function) */
      ...
  Column
  Clause /* instances: SELECT, FROM, WHERE, etc. */
    ...
  Function (argument_arity, result_arity, ...)
    BooleanFunction /* AND, OR, NOT, IN, etc. */
    AggregateFunction /* SUM, COUNT, etc. */
    ScalarFunction
    ...

Every meta-type has a name, and a property list. Where it makes sense, meta-type
objects may have other properties as well.
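Such singleton meta-type objects could look like the sketch below, which adds
an is-a test (single inheritance only, for brevity - the model above allows
multiple inheritance; all names are illustrative):

```cpp
/*
  Meta-type singleton sketch: one statically allocated object per node type,
  linked to its supertype, so clients can ask questions such as
  "is this node some kind of Function?" without knowing the concrete type.
*/
struct Meta_type {
  const char *name;
  const Meta_type *super; // nullptr at the root of the hierarchy

  bool is_a(const Meta_type &other) const {
    for (const Meta_type *t = this; t; t = t->super)
      if (t == &other)
        return true;
    return false;
  }
};

// A fragment of the hierarchy from the text, as singletons.
const Meta_type mt_object{"Object", nullptr};
const Meta_type mt_function{"Function", &mt_object};
const Meta_type mt_boolfunc{"BooleanFunction", &mt_function};
const Meta_type mt_table{"Table", &mt_object};
```

A node's type() method would return a reference to one of these singletons,
so type comparisons are cheap pointer walks and need no RTTI.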

1.2.1.2 Example of iteration over a query representation via AQTIterator

Suppose we want to implement a SE with the following capabilities:
- only SELECT, FROM, WHERE clauses
- no expressions, only projection (that is list of columns in SELECT),
  and conjunction of ranges
- no joins

The overall form of the SQL dialect the SE "understands" is:

SELECT col_1, ..., col_N
FROM table
WHERE
  (const_1 [<,>,=] col_i OR const_2 [<,>,=] col_j)
  AND
  ... other range conditions ...;


Then one possible way to check whether this SE can process a query fragment is
the following algorithm:

procedure check_qf_pushdownability(query)
{
  AQTIterator qf_iterator(query);

  /* Sketch: visit every node in the fragment and mark anything outside
     the SE's dialect described above as non-pushdownable. */
  do
  {
    Type node_type = qf_iterator.type();
    if (node_type is not one of: SELECT/FROM/WHERE Clause, Column,
        RegularTable, comparison or AND/OR BooleanFunction, constant)
      qf_iterator.mark_as_non_pushdownable();
  } while (qf_iterator.next());
}


1.2.2 Multiple-inheritance with up/down-casting

The general idea is to create a parallel "public" type hierarchy for a minimal
subset of the internal representation classes. These "public" or "external"
types have only public methods, no properties, and no instances. We achieve
information hiding by multiple inheritance between the public and the internal
classes, and by casting internal objects into the nearest public superclass.

TODO: detailed explanation with examples from our discussion

struct TABLE
{
  TABLE_SHARE *s;
  TABLE *next;      /* next table in the enumerated list */
};

class Public_Table : private TABLE {
public:
    char *get_name() { return s->table_name.str; }
    Public_Table *next_in_list() { return (Public_Table *)next; }
    bool GIS_Is_Supported() { return s->db_type->flags & HA_CAN_GEOMETRY; }
    Public_Table *next_InnoDB_table() {
      TABLE *n;
      for (n= next; n; n= n->next)
        if (n->s->db_type == innobase_hton_ptr)
          return (Public_Table *)n;
      return NULL;
    }
    void mark_as_pushdownable();
};

/* Example usage: */

  ...
  TABLE *table;
  ...
  handler->AQT_push_down((Public_Table *)table);


Item_func()....


Public_Item_func() {
  next_child(iterator_state *state) {
    if (state->current++ == 0)
      return arg0;
    return arguments[state->current - 2];
  }
}


--------------------------------------------------------------------------------
3. handler.h changes overview
--------------------------------------------------------------------------------

Very rough overview of proposed changes to handler.h

Note: The AQT_push_down() method needs to be part of handlerton, since "joins"
correspond to multiple tables, possibly from different engines. Some
single-table "filter condition" components may be pushed to the handler
interface, but that will be addressed in the next lower-level design.

struct handlerton {
  ...
  int (*AQT_push_down_analyze)(handlerton *hton, THD *thd,
                               Query *qtree, PushDownContext *ctxt);
};

Note: The analyze routine will mark which fragments are pushable, attach costs
to each pushable fragment, etc. The actual pushdown would happen through a
different mechanism at a later point in time.


Note: The pointer to THD is needed as it represents the connection object.
      The pushdown analysis may depend on the connection state.
      We could possibly define class Connection { ...... ; THD *thd; }
      and replace THD * by Connection *. Nevertheless, we should provide
      a way of getting at the THD if one really wants to get hold of it.

TODO: Describe in detail what goes into and comes out of the "ctxt".
      The SE may include in the context some handles to the supported
      fragments, which may be reused when the query fragment is actually
      pushed down.


// For the description of SQL_object, see the HLS.

class Query : public SQL_object {

   int qtype;  // corresponds to enum_sql_command: SQLCOM_SELECT,
               // SQLCOM_CREATE_TABLE, etc. (in sql_lex.h)

   // TBD: the following members could also be moved to SQL_object.
   virtual void print();
   virtual String toString();      // Java style: SQL + some info
   // Tree traversal function. TraverseOrder: PRE/POST/IN-order.
   virtual void traverse_cond(QtraverseFuncType tfunc, void *arg,
                              TraverseOrder order);
};

// This class is the same as / similar to the one specified in the HLS by Timour.
class Query_select : public Query {
    List                      select_clause;
    Table_reference           from_clause;
    Value_expression_boolean  where_clause;
    List                      group_by_clause;
    Value_expression_boolean  having_clause;
    List                      order_by_clause;
    Limit_specification       limit_clause;
    bool                      isDistinct;
};