Notes

Tip: Put the following in your .psqlrc file:
    \set PSQL_EDITOR "/usr/bin/vim -c ':set ft=sql'"

Basic cluster lifecycle:
    initdb -D ./postgres
    pg_ctl -D ./postgres -l logfile start
    vi logfile
    psql postgres
    pg_ctl -D ./postgres -m smart stop

pgsteps(){
cat <<EOF
... >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
EOF
}

Download PostgreSQL from: postgresql.org
    Latest released version:    8.2.6
    Latest development version: 8.3RC2
Web interface for the PostgreSQL source: http://developer.postgresql.org/cvsweb.cgi/pgsql/

The official Solaris build uses the following configure options:

PGVER=8.3pgdg
ROOT=
CFLAGS on SPARC:  "-xO3 -xarch=v8 -xspace -W0,-Lt -Xa -xildoff -W2,-xwrap_int -xCC"
CFLAGS on x86/64: "-xO3 -xarch=386 -xchip=pentium -xspace -Xa -xildoff -xCC"

./configure CC=/opt/SUNWspro/bin/cc \
    "CFLAGS=$CFLAGS" \
    "LD_OPTIONS=-R/usr/sfw/lib -L/usr/sfw/lib -M /usr/lib/ld/map.noexstk" \
    MAKE=/usr/bin/gmake \
    --prefix=/postgres/$PGVER \
    --exec-prefix=/postgres/$PGVER \
    --bindir=/postgres/$PGVER/bin \
    --libexecdir=/postgres/$PGVER/bin \
    --sbindir=/postgres/$PGVER/bin \
    --datadir=/postgres/$PGVER/share \
    --sysconfdir=/postgres/$PGVER/etc \
    --mandir=/postgres/$PGVER/man \
    --libdir=/postgres/$PGVER/lib \
    --includedir=/postgres/$PGVER/include \
    --sharedstatedir=/var/postgres/$PGVER \
    --localstatedir=/var/postgres/$PGVER \
    --enable-nls \
    --with-docdir=/postgres/$PGVER/doc \
    --with-system-tzdata=/usr/share/lib/zoneinfo \
    --with-tcl --with-perl --with-python \
    --with-pam --with-openssl --with-krb5 \
    --without-readline \
    --with-libxml --with-libxslt \
    --enable-thread-safety \
    --enable-dtrace \
    --with-includes=/usr/include:/usr/sfw/include:/usr/include/kerberosv5 \
    --with-tclconfig=/usr/lib \
    --with-libs=/usr/lib:/usr/sfw/lib

gmake DTRACE_BITS=32 'LD_OPTIONS=-R$$ORIGIN/../lib:/usr/sfw/lib -L/usr/sfw/lib \
    -M /usr/lib/ld/map.noexstk' MAKE=/usr/bin/gmake all
gmake DESTDIR=$ROOT MAKE=/usr/bin/gmake install

pgcompile() {
    #./configure --prefix=$PWD/pgmyver --enable-debug MAKE=gmake CC=cc CXX=CC # --enable-dtrace
    ./configure --prefix=$PWD/pgmyver MAKE=gmake CC=/opt/studio12/SUNWspro/bin/cc
    gmake MAKE=/usr/bin/gmake all
}

pgsolarishelp(){
cat <<EOF
* Enable the PostgreSQL service. This also has the very nice side effect of
  initializing the database:
      $ /usr/sbin/svcadm enable postgresql:version_82
  Then check that it is running:
      $ svcs | grep postgresql
  It should show as enabled. If it doesn't, run svcs -x and it will provide more
  information about what went wrong. Note that you may need to wait about thirty
  seconds for it to start the first time, as it is initializing the database.
  That's it: you now have PostgreSQL running as a service, and it will restart
  every time you reboot. You can connect to it easily from NetBeans, using the
  username 'postgres' and an empty password. If all is set up correctly, the
  connection should succeed and you're ready to go.
EOF
}

RPE pointer for postgres: http://rpe.sfbay/bin/view/Tech/PostgreSQL

pg_config - displays configure options.

SUNWpostgr-libs        == PostgreSQL client libraries
SUNWpostgr             == PostgreSQL client programs (and some libs)
SUNWpostgr-contrib     == misc contributed items such as pgbench, /usr/share/pgsql/contrib/*
SUNWpostgr-devel       == header files; needed to compile C or C++ programs that talk
                          directly to a PostgreSQL server, and the ecpg embedded SQL
                          preprocessor for C
SUNWpostgr-server      == server binaries
SUNWpostgr-server-data == creates the default data directory layout.
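Referring back to the initdb/pg_ctl/psql steps at the top: once connected, a quick
sanity check from psql. A minimal sketch, not taken from the notes themselves:

    SELECT version(), current_database(), current_user;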
In Nevada, PHP database support is in a separate package. CR 6658437 was filed to track this:
    /usr/php5/5.2.4/modules/pgsql.so f none 0755 root bin 138280 51538 1200102616 SUNWphp524-pgsql

OSR for pgbouncer:
    https://opensourcereview.central.sun.com/app?action=ViewReq&review_type=Full&traq_num=7681
    https://clustra.norway.sun.com/twiki/bin/view/PostgreSQL/DBTGWorkPlanForFY08

===================================================================
1. How do you check out files from the main PostgreSQL site?
2. What features does PostgreSQL support? Is it JDBC 3/4 compliant?
3. What ODBC drivers are available for PostgreSQL, and on which platforms?
4. Is there a demo application using PostgreSQL?
5. What drivers are supported for PostgreSQL (e.g. .NET, Perl, PHP)?
6. What drivers are supported for MySQL (e.g. .NET, Perl, PHP)?
7. Is there NetBeans support for PostgreSQL?
===================================================================

pgpointers(){
cat <<EOF
...
EOF
}

pg_tables system view definition (pg_catalog):

    SELECT n.nspname AS schemaname,
           c.relname AS tablename,
           pg_get_userbyid(c.relowner) AS tableowner,
           t.spcname AS "tablespace",
           c.relhasindex AS hasindexes,
           c.relhasrules AS hasrules,
           c.reltriggers > 0 AS hastriggers
      FROM pg_class c
           LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
           LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
     WHERE c.relkind = 'r'::"char";

Transactions and savepoints:

    BEGIN;
    UPDATE accounts SET balance = balance - 100.00 WHERE name = 'Alice';
    SAVEPOINT my_savepoint;
    UPDATE accounts SET balance = balance + 100.00 WHERE name = 'Bob';
    -- oops ... forget that and use Wally's account
    ROLLBACK TO my_savepoint;
    UPDATE accounts SET balance = balance + 100.00 WHERE name = 'Wally';
    COMMIT;

Dollar-quoted string constants:
    $$Dianne's horse$$
    $SomeTag$Dianne's horse$SomeTag$

Escape string constants: E'myback\b\r'  -- E for escape; escape sequences are allowed.
Bit string constants:    B'1001'
Numeric constants:       1.925e-3

Type cast syntax:
    REAL '1.23'             -- string style
    1.23::REAL              -- PostgreSQL (historical) style
    '1.23'::REAL
    CAST ( '1.23' AS REAL )
    REAL('1.23')            -- function-call style is also allowed

Positional parameters
For example, consider the definition of a function, dept, as:

    CREATE FUNCTION dept(text) RETURNS dept
        AS $$ SELECT * FROM dept WHERE name = $1 $$
        LANGUAGE SQL;

Subscripts:
    mytable.arraycolumn[4]
    mytable.two_d_column[17][34]
    $1[10:42]                      -- array slice
    (arrayfunction(a,b))[42]

OPERATOR(schema.operatorname), e.g. OPERATOR(pg_catalog.+)

Scalar subqueries:
    SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name)
    FROM states;
If the subquery returns no rows, its result is taken to be NULL.

Array constructor:
    SELECT ARRAY[1,2,3+4];
     array
    ---------
     {1,2,7}
    (1 row)

ROW type:
    CREATE TYPE myrowtype AS (f1 int, f2 text, f3 numeric);
myrowtype is a "ROW" (composite) type and is compatible with a table type,
i.e. you can pass a ROW where a function expects a table row.

Numeric type:
    price numeric
The numeric type can store fractional components, as would be typical of monetary amounts.

    CREATE TABLE products (
        product_no integer DEFAULT nextval('products_product_no_seq'),
        ...
    );
is the same as:
    CREATE TABLE products (
        product_no SERIAL,
        mydate date DEFAULT now(),
        ...
    );

5.3.1. Check constraints
    CREATE TABLE products (
        product_no integer,
        name text,
        price numeric CHECK (price > 0)
    );
    CREATE TABLE products (
        product_no integer,
        name text,
        price numeric CONSTRAINT positive_price CHECK (price > 0)
    );

Unique constraint, primary key, primary key(a, c)

Foreign key:
    CREATE TABLE orders (
        order_id integer PRIMARY KEY,
        product_no integer REFERENCES products (product_no),
        quantity integer
    );
The standard also supports specifying this as:
    ....., FOREIGN KEY(product_no) REFERENCES products(product_no) .....
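A minimal sketch of these constraints rejecting bad data, using the products and orders
tables above. The row values are made up, and it assumes product_no is unique/primary
key in products so the foreign key can reference it:

    INSERT INTO products (product_no, name, price) VALUES (1, 'Widget', 19.99);  -- ok
    INSERT INTO products (product_no, name, price) VALUES (2, 'Broken', -1.00);  -- rejected by the positive_price CHECK
    INSERT INTO orders (order_id, product_no, quantity) VALUES (100, 1, 5);      -- ok
    INSERT INTO orders (order_id, product_no, quantity) VALUES (101, 99, 1);     -- rejected: no product 99 to reference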
    CREATE TABLE order_items (
        product_no integer REFERENCES products ON DELETE RESTRICT,
        order_id integer REFERENCES orders ON DELETE CASCADE,
        quantity integer,
        PRIMARY KEY (product_no, order_id)
    );

System columns:
    oid      - don't assume it is unique across tables; use serial instead.
               Explicitly created only with CREATE TABLE ... WITH OIDS;
    tableoid - useful when tables inherit.

Privileges:
    GRANT UPDATE ON accounts TO joe;
    GRANT UPDATE ON accounts TO public;
There are several different privileges: SELECT, INSERT, UPDATE, DELETE, REFERENCES,
TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE.
    REVOKE ALL ON accounts FROM PUBLIC;

Schemas:
A database contains one or more named schemas, which in turn contain tables.
    CREATE SCHEMA myschema;
You actually refer to database.schema.table:
    CREATE TABLE myschema.mytable ( ... );
Often the schema name is dropped, in which case it is the same as the user name:
    CREATE SCHEMA schemaname AUTHORIZATION username;
Every new database contains a "public" schema; all tables go there by default.

    postgres=# show search_path;
     search_path
    ----------------
     "$user",public
    (1 row)

    SET search_path TO myschema,public;

Each database also has access to the pg_catalog system catalog schema.

Inheritance:
    CREATE TABLE cities (
        name       text,
        population float,
        altitude   int      -- in feet
    );
    CREATE TABLE capitals (
        state      char(2)
    ) INHERITS (cities);
Avoid inheritance where possible, because of problems with constraint inheritance.

explain select * from pg_tables;

You can implement key partitioning by doing something like this:
1. Define a plpgsql function.
2. Define a trigger which calls the plpgsql function on each insert.

    CREATE OR REPLACE FUNCTION measurement_insert_trigger()
    RETURNS TRIGGER AS $$
    BEGIN
        INSERT INTO measurement_y2008m01 VALUES (NEW.*);
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

After creating the function, we create a trigger which calls the trigger function:

    CREATE TRIGGER insert_measurement_trigger
        BEFORE INSERT ON measurement
        FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();

Multi-row insert:
    INSERT INTO products (product_no, name, price) VALUES
        (1, 'Cheese', 9.99),
        (2, 'Bread', 1.99),
        (3, 'Milk', 2.99);

Qualified joins:
    T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 ON boolean_expression
    T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 USING ( join column list )
    T1 NATURAL { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2
The words INNER and OUTER are optional in all forms. INNER is the default;
LEFT, RIGHT, and FULL imply an outer join.
To put this together, assume we have tables t1:

     num | name
    -----+------
       1 | a
       2 | b
       3 | c

and t2:

     num | value
    -----+-------
       1 | xxx
       3 | yyy
       5 | zzz

then we get the following results for the various joins:

    => SELECT * FROM t1 CROSS JOIN t2;
     num | name | num | value
    -----+------+-----+-------
       1 | a    |   1 | xxx
       1 | a    |   3 | yyy
       1 | a    |   5 | zzz
       2 | b    |   1 | xxx
       2 | b    |   3 | yyy
       2 | b    |   5 | zzz
       3 | c    |   1 | xxx
       3 | c    |   3 | yyy
       3 | c    |   5 | zzz
    (9 rows)

    => SELECT * FROM t1 INNER JOIN t2 ON t1.num = t2.num;
     num | name | num | value
    -----+------+-----+-------
       1 | a    |   1 | xxx
       3 | c    |   3 | yyy
    (2 rows)

    => SELECT * FROM t1 INNER JOIN t2 USING (num);
     num | name | value
    -----+------+-------
       1 | a    | xxx
       3 | c    | yyy
    (2 rows)

    => SELECT * FROM t1 NATURAL INNER JOIN t2;
     num | name | value
    -----+------+-------
       1 | a    | xxx
       3 | c    | yyy
    (2 rows)

    => SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num;
     num | name | num | value
    -----+------+-----+-------
       1 | a    |   1 | xxx
       2 | b    |     |
       3 | c    |   3 | yyy
    (3 rows)

    => SELECT * FROM t1 LEFT JOIN t2 USING (num);
     num | name | value
    -----+------+-------
       1 | a    | xxx
       2 | b    |
       3 | c    | yyy
    (3 rows)

    => SELECT * FROM t1 RIGHT JOIN t2 ON t1.num = t2.num;
     num | name | num | value
    -----+------+-----+-------
       1 | a    |   1 | xxx
       3 | c    |   3 | yyy
         |      |   5 | zzz
    (3 rows)

    => SELECT * FROM t1 FULL JOIN t2 ON t1.num = t2.num;
     num | name | num | value
    -----+------+-----+-------
       1 | a    |   1 | xxx
       2 | b    |     |
       3 | c    |   3 | yyy
         |      |   5 | zzz
    (4 rows)

The join condition specified with ON can also contain conditions that do not relate
directly to the join. This can prove useful for some queries but needs to be thought
out carefully. For example:

    => SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num AND t2.value = 'xxx';
     num | name | num | value
    -----+------+-----+-------
       1 | a    |   1 | xxx
       2 | b    |     |
       3 | c    |     |
    (3 rows)

Table functions:
    CREATE TABLE foo (fooid int, foosubid int, fooname text);
    CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$
        SELECT * FROM foo WHERE fooid = $1;
    $$ LANGUAGE SQL;
    SELECT * FROM getfoo(1) AS t1;

GROUP BY:
    SELECT select_list
        FROM ...
        [WHERE ...]
        GROUP BY grouping_column_reference [, grouping_column_reference]...
    SELECT x, sum(y) FROM test1 GROUP BY x;

SELECT DISTINCT select_list ...     -- applies to the entire row only.

Set operations:
    query1 UNION [ALL] query2
    query1 INTERSECT [ALL] query2
    query1 EXCEPT [ALL] query2

ORDER BY:
    SELECT select_list
        FROM table_expression
        ORDER BY sort_expression1 [ASC | DESC] [NULLS { FIRST | LAST }]
                 [, sort_expression2 [ASC | DESC] [NULLS { FIRST | LAST }] ...]

LIMIT/OFFSET:
    SELECT select_list
        FROM table_expression
        [ ORDER BY ...
        ]
        [ LIMIT { number | ALL } ]
        [ OFFSET number ]

    VALUES (1, 'one')   is equivalent to   SELECT 1, 'one';

Data types:

    Name                                     Aliases                        Description
    bigint                                   int8                           signed eight-byte integer
    bigserial                                serial8                        autoincrementing eight-byte integer
    bit [ (n) ]                                                             fixed-length bit string
    bit varying [ (n) ]                      varbit                         variable-length bit string
    boolean                                  bool                           logical Boolean (true/false)
    box                                                                     rectangular box in the plane
    bytea                                                                   binary data ("byte array")
    character varying [ (n) ]                varchar [ (n) ]                variable-length character string
    character [ (n) ]                        char [ (n) ]                   fixed-length character string
    cidr                                                                    IPv4 or IPv6 network address
    circle                                                                  circle in the plane
    date                                                                    calendar date (year, month, day)
    double precision                         float8                         double precision floating-point number
    inet                                                                    IPv4 or IPv6 host address
    integer                                  int, int4                      signed four-byte integer
    interval [ (p) ]                                                        time span
    line                                                                    infinite line in the plane
    lseg                                                                    line segment in the plane
    macaddr                                                                 MAC address
    money                                                                   currency amount
    numeric [ (p, s) ]                       decimal [ (precision, scale) ] arbitrary precision; scale = number of decimals
    path                                                                    geometric path in the plane
    point                                                                   geometric point in the plane
    polygon                                                                 closed geometric path in the plane
    real                                     float4                         single precision floating-point number
    smallint                                 int2                           signed two-byte integer
    serial                                   serial4                        autoincrementing four-byte integer
    text                                                                    variable-length character string
    time [ (p) ] [ without time zone ]                                      time of day
    time [ (p) ] with time zone              timetz                         time of day, including time zone
    timestamp [ (p) ] [ without time zone ]                                 date and time
    timestamp [ (p) ] with time zone         timestamptz                    date and time, including time zone
    tsquery                                                                 text search query
    tsvector                                                                text search document
    txid_snapshot                                                           user-level transaction ID snapshot
    uuid                                                                    universally unique identifier
    xml                                                                     XML data

Compatibility: the following types (or spellings thereof) are specified by SQL:
bigint, bit, bit varying, boolean, char, character varying, character, varchar, date,
double precision, integer, interval, numeric, decimal, real, smallint,
time (with or without time zone), timestamp (with or without time zone), xml.

    char  - 1 byte
    text  - unlimited character array
    name  - 64-byte character type; used for object names.
    bytea - variable-length binary string; 1 or 4 bytes overhead plus the actual
            binary string. Escaped characters: E'\\xxx' (octal value), e.g.
            SELECT E'\\001'::bytea;

Enum type:
    CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

XML parsing:
    XMLPARSE (DOCUMENT '<?xml version="1.0"?><book><title>Manual</title><chapter>...</chapter></book>')
    XMLPARSE (CONTENT 'abc<foo>bar</foo><bar>foo</bar>')

Arrays:
    CREATE TABLE sal_emp (
        name            text,
        pay_by_quarter  integer[],
        schedule        text[][]
        -- postgres treats single- and multi-dimensional arrays the same at runtime;
        -- any size specified is also ignored by postgres.
    );

Declaration of composite types:
    CREATE TYPE complex AS (
        r double precision,
        i double precision
    );

Row expression syntax can also be used to construct composite values:
    ROW('fuzzy dice', 42, 1.99)
    ROW('', 42, NULL)
The ROW keyword is actually optional as long as you have more than one field in the
expression, so these can simplify to:
    ('fuzzy dice', 42, 1.99)
    ('', 42, NULL)

    INSERT INTO mytab (complex_col) VALUES((1.1,2.2));

Object identifier types:
Using a user-created table's OID column as a primary key is discouraged; OIDs are best
used only for references to system tables. Use bigserial instead.
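A quick sketch of the enum and composite types declared above in use; the person and
test_complex tables (and their rows) are made up for illustration:

    CREATE TABLE person (name text, current_mood mood);
    INSERT INTO person VALUES ('Moe', 'happy');
    SELECT * FROM person WHERE current_mood = 'happy';

    CREATE TABLE test_complex (c complex);
    INSERT INTO test_complex VALUES (ROW(1.1, 2.2));
    SELECT (c).r, (c).i FROM test_complex;   -- field selection needs the parentheses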
Table 9-44. Session information functions

    Name                           Return Type                Description
    current_database()             name                       name of current database
    current_schema()               name                       name of current schema
    current_schemas(boolean)       name[]                     names of schemas in search path, optionally including implicit schemas
    current_user                   name                       user name of current execution context
    inet_client_addr()             inet                       address of the remote connection
    inet_client_port()             int                        port of the remote connection
    inet_server_addr()             inet                       address of the local connection
    inet_server_port()             int                        port of the local connection
    pg_my_temp_schema()            oid                        OID of session's temporary schema, or 0 if none
    pg_is_other_temp_schema(oid)   boolean                    is schema another session's temporary schema?
    pg_postmaster_start_time()     timestamp with time zone   server start time
    session_user                   name                       session user name
    user                           name                       equivalent to current_user
    version()                      text                       PostgreSQL version information

Setting config:
    SELECT set_config('log_statement_stats', 'off', false);

Partial index:
    CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target)
        WHERE success;

domain = base type with a constraint.

Pseudo-types: any, anyarray, anyelement, anyenum, anynonarray, cstring, internal,
void, opaque, trigger, etc.
Polymorphic types: any, anyarray, etc.

User-defined functions
PostgreSQL provides four kinds of functions:

* query language functions (functions written in SQL) (Section 34.4)

    CREATE FUNCTION add(integer, integer) RETURNS integer
        AS 'select $1 + $2;'
        LANGUAGE SQL;

* procedural language functions (functions written in, for example, PL/pgSQL or PL/Tcl) (Section 34.7)

    CREATE FUNCTION somefunc() RETURNS integer AS $$
    << outerblock >>
    DECLARE
        quantity integer := 30;
    BEGIN
        RAISE NOTICE 'Quantity here is %', quantity;  -- Prints 30
        quantity := 50;
        -- Create a subblock
        DECLARE
            quantity integer := 80;
        BEGIN
            ...
        END;
        RETURN quantity;
    END;
    $$ LANGUAGE plpgsql;

  The server loads plpgsql.so to interpret this -- not part of the core postgres distribution?
  Others: pltcl.so, plperl.so, plpython.so
  See src/test/regress/sql for more examples.

* internal functions - statically linked; users can create aliases for them.

* C-language functions - loaded from a dynamic library, e.g. funcs.so:

    CREATE FUNCTION add_one(integer) RETURNS integer
        AS 'DIRECTORY/funcs', 'add_one'
        LANGUAGE C STRICT;

Types in plpgsql declarations:
variable%TYPE is allowed in a plpgsql declaration; useful with dynamic/polymorphic
types such as anyelement.
name table_name%ROWTYPE; e.g.:

    CREATE FUNCTION merge_fields(t_row table1) RETURNS text AS $$
    DECLARE
        t2_row table2%ROWTYPE;
    BEGIN
        SELECT * INTO t2_row FROM table2 WHERE ... ;
        RETURN t_row.f1 || t2_row.f3 || t_row.f5 || t2_row.f7;
    END;
    $$ LANGUAGE plpgsql;

    SELECT merge_fields(t.*) FROM table1 t WHERE ... ;

A trigger is defined either for each statement or for each row. Depending on that,
the following variables are defined for trigger functions:

    NEW      Data type RECORD; holds the new database row for INSERT/UPDATE operations
             in row-level triggers. NULL in statement-level triggers.
    OLD      Data type RECORD; holds the old database row for UPDATE/DELETE operations
             in row-level triggers. NULL in statement-level triggers.
    TG_NAME  Data type name; the name of the trigger actually fired.
    TG_WHEN  Either BEFORE or AFTER, etc.
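A small sketch of a row-level trigger using NEW, OLD, TG_NAME and TG_WHEN, hung off the
products table from the earlier notes; the function and trigger names are made up:

    CREATE OR REPLACE FUNCTION log_price_change() RETURNS trigger AS $$
    BEGIN
        IF NEW.price IS DISTINCT FROM OLD.price THEN
            RAISE NOTICE 'trigger % (%): price % -> %', TG_NAME, TG_WHEN, OLD.price, NEW.price;
        END IF;
        RETURN NEW;   -- a BEFORE trigger must return the (possibly modified) row
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER products_price_audit
        BEFORE UPDATE ON products
        FOR EACH ROW EXECUTE PROCEDURE log_price_change();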
This is how you create the language:
    CREATE LANGUAGE plpgsql;
For a language not known in the pg_pltemplate catalog, a sequence such as this is needed:
    CREATE FUNCTION plsample_call_handler() RETURNS language_handler
        AS '$libdir/plsample'
        LANGUAGE C;
    CREATE LANGUAGE plsample
        HANDLER plsample_call_handler;

    create language pltcl;   -- or pltclu for unrestricted Tcl access.

Note that the STRICT clause frees you from handling null input values: if a null value
is passed, the function is not called at all and simply returns null.

User-defined server functions (and PL interpreters) can use the SPI_* server API to
execute queries inside the server.

Users, groups and tablespaces are shared across all databases in a cluster, but nothing
else is. The public, information_schema, and user-defined schemas are all specific to a
database. A user-defined schema just provides a new namespace for your tables; that's
all. information_schema provides SQL-standard access to table (and other) structure
definitions; most of its tables are implemented as views.

    \d        == list tables
    \l        == list databases
    \d table  == list columns

There are 2 internal schemas: information_schema and pg_catalog. They are not displayed
by default, but they are there (and they are also database-specific).
To list the tables in these schemas do the following:

    select table_name from information_schema.tables where table_schema = 'pg_catalog';

This is the same as \dS (list system catalog tables).

There are 70+ pg_* system catalog tables in the pg_catalog schema, including
pg_database, pg_tables, pg_views, pg_triggers, etc.

There are 40+ rows in the information_schema catalog (mostly views), including
tables, views, triggers, sql_features, etc. Get these with:

    select table_name from information_schema.tables where table_schema = 'information_schema';

To see a definition, do: \d information_schema.triggers
The information_schema table definitions make heavy use of pg_catalog tables/views.

Informational
    \d [NAME]               describe table, index, sequence, or view
    \d{t|i|s|v|S} [PATTERN] list tables/indexes/sequences/views/system tables (add "+" for more detail)
    \da [PATTERN]           list aggregate functions
    \db [PATTERN]           list tablespaces (add "+" for more detail)
    \dc [PATTERN]           list conversions
    \dC                     list casts
    \dd [PATTERN]           show comment for object
    \dD [PATTERN]           list domains
    \df [PATTERN]           list functions (add "+" for more detail)
    \dg [PATTERN]           list groups
    \dn [PATTERN]           list schemas (add "+" for more detail)
    \do [NAME]              list operators
    \dl                     list large objects, same as \lo_list
    \dp [PATTERN]           list table, view, and sequence access privileges
    \dT [PATTERN]           list data types (add "+" for more detail)
    \du [PATTERN]           list users
    \l                      list all databases (add "+" for more detail)
    \z [PATTERN]            list table, view, and sequence access privileges (same as \dp)

Getting the postgres source (cvs from http://www.nongnu.org/cvs/):

    cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login
        -- supply any password; a .cvspass file is created and reused later.
    cvs -z3 -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co -P pgsql
    cvs -z3 update -d -P

pgbouncer : connection pooling for postgres: http://pgfoundry.org/projects/pgbouncer/
psycopg2  : Python DB-API 2.0 driver for PostgreSQL: http://www.initd.org/tracker/psycopg/wiki/PsycopgTwo
Ruby driver for postgres: http://rubyforge.org/projects/ruby-pg
libpqxx   : http://pqxx.org/ -- C++ client library for postgres
ODBC      : http://www.unixodbc.org/
slony     : http://slony.info
pgsnmp    : http://pgsnmpd.projects.postgresql.org/
pgpool-II : a connection pooling/replication server for PostgreSQL.
pljava    : PL/Java is a free add-on module that brings Java stored procedures, triggers,
            and functions to the PostgreSQL backend via the standard JDBC interface.

Postgres migration: http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL

Replication/clustering projects:
    pgpool    : http://pgpool.projects.postgresql.org/
    PGcluster : http://pgcluster.projects.postgresql.org/
    Slony     : http://slony.info/
    Sequoia   : http://sequoia.continuent.org
    GORDA     : http://gorda.di.uminho.pt/

http://www.pgcon.org/2007/schedule/track/Basics/index.en.html

PgFoundry is the PostgreSQL Development Group's site for developing and publishing
PostgreSQL-related software that is not part of the core product.

JDBC URL to connect to postgres : jdbc:postgresql://localhost:5432/postgres
JDBC driver name for PostgreSQL : org.postgresql.Driver ( /usr/postgres/8.2/jdbc/postgresql.jar )

Derby connection URLs:
    "jdbc:derby:mydb;create=true";
    "jdbc:derby:mydb;user=dbuser;password=dbuserpwd";
Derby separate client driver: org.apache.derby.jdbc.ClientDriver
Derby embedded driver:        org.apache.derby.jdbc.EmbeddedDriver
    (the embedded driver is here: /opt/SUNWjavadb/lib/derby.jar)

Skype tools for postgres:
    https://developer.skype.com/SkypeGarage/DbProjects/SkyTools
    http://pgfoundry.org/projects/skytools/

Debugging postgres:

postgres.c : exec_simple_query(query)
parser.c   : List *raw_parser(char *str)
             Returns a list of parse trees for all commands in the string.

src/include/nodes/parsenodes.h:

 78 /*
 79  * Query -
 80  *    Parse analysis turns all statements into a Query tree (via transformStmt)
 81  *    for further processing by the rewriter and planner.
 82  *
 88  * Planning converts a Query tree into a Plan tree headed by a PlannedStmt
 89  * node --- the Query structure is not used by the executor.
 90  */
typedef struct Query
{
    NodeTag     type;
    CmdType     commandType;    /* select|insert|update|delete|utility */
    IntoClause *intoClause;     /* target for SELECT INTO / CREATE TABLE AS */
    FromExpr   *jointree;       /* table join tree (FROM and WHERE clauses) */
    List       *targetList;     /* target list (of TargetEntry) */
    List       *returningList;  /* return-values list (of TargetEntry) */
    List       *groupClause;    /* a list of GroupClause's */
    Node       *havingQual;     /* qualifications applied to groups */
    List       *distinctClause; /* a list of SortClause's */
    List       *sortClause;     /* a list of SortClause's */
    Node       *limitOffset;    /* # of result tuples to skip (int8 expr) */
    Node       *limitCount;     /* # of result tuples to return (int8 expr) */
    List       *rowMarks;       /* a list of RowMarkClause's */
    Node       *setOperations;  /* set-operation tree if this is top level of
                                 * a UNION/INTERSECT/EXCEPT query */
}

primnodes.h:

1029  * NOTE: in the raw output of gram.y, a join tree contains RangeVar,
1030  * RangeSubselect, and RangeFunction nodes, which are all replaced by
1031  * RangeTblRef nodes during the parse analysis phase. Also, the top-level
1032  * FromExpr is added during parse analysis; the grammar regards FROM and
1033  * WHERE as separate.
1034  * ----------------------------------------------------------------

How to debug postgres:

    $ gdb postgres
    (gdb) run -B 20 -D ~/data/ -d 1 -s mydatabase < ScanQueries.sql
    (gdb) b ReadCommand

    select * from t1 as one, t1 as two where i > 0;

    List *parse_tree_list = pg_parse_query(query_string);

Parse state:

    T_SelectStmt {
        targetList  := List{ResTarget} => {ColumnRef} => List{T_String, i.e. Value} => "*"
        fromClause  := List{RangeVar}  => {one, two}
                       => {relname="t1", alias="one"}, {relname="t1", alias="two"},
        whereClause := A_Expr {lexpr=ColumnRef, rexpr=A_Const, name=">"}
    }

    Query *query = transformStmt(pstate, Node *parseTree);

Query:

    $14 = {
      type = T_Query,
      commandType = CMD_UNKNOWN,
      querySource = QSRC_ORIGINAL,
      canSetTag = 0 '\0',
      utilityStmt = 0x0,
      resultRelation = 0,
      intoClause = 0x0,
      hasAggs = 0 '\0',
      hasSubLinks = 0 '\0',
      rtable = 0x0,
      jointree = 0x0,
      targetList = 0x0,
      returningList = 0x0,
      groupClause = 0x0,
      havingQual = 0x0,
      distinctClause = 0x0,
      sortClause = 0x0,
      limitOffset = 0x0,
      limitCount = 0x0,
      rowMarks = 0x0,
      setOperations = 0x0
    }

    transformFromClause(pstate, stmt->fromClause); ==> {
        RangeTableEntry *rte = transformTableEntry(pstate, (RangeVar *) n);
        // What is the difference between an RTE and a RangeTblRef?
        // A RangeTblRef is a node that just contains an integer index pointing to an RTE.
        // The parser state's relnamespace and varnamespace are updated with these 2 new tables.
    }

    /* transform targetlist */
    qry->targetList = transformTargetList(pstate, stmt->targetList); == {
        For both RTEs, call expandRTE(RangeTblEntry *rte) to expand the "*"
    }

    /* mark column origins */
    markTargetListOrigins(pstate, qry->targetList);

    /* transform WHERE */
    qual = transformWhereClause(pstate, stmt->whereClause, "WHERE");

    /* transform HAVING */
    qry->havingQual = transformWhereClause(pstate, stmt->havingClause, "HAVING");

    // process SELECT INTO / CREATE TABLE AS queries
    qry->intoClause = stmt->intoClause;
    ...
    /* End of parse_analyze() */

    querytree_list = pg_rewrite_query(query);
    List *plantree_list = pg_plan_queries(querytree_list, 0, NULL, true);

For the table definition, see the Relation definition in
include/postgresql/server/utils/rel.h:

typedef struct RelationData
{
    RelFileNode rd_node;        /* relation physical identifier */
    /* use "struct" here to avoid needing to include smgr.h: */
    struct SMgrRelationData *rd_smgr;   /* cached file handle, or NULL */
    BlockNumber rd_targblock;   /* current insertion target block, or
                                 * InvalidBlockNumber */
    int         rd_refcnt;      /* reference count */
    bool        rd_istemp;      /* rel uses the local buffer mgr */
    bool        rd_isnailed;    /* rel is nailed in cache */
    bool        rd_isvalid;     /* relcache entry is valid */
    char        rd_indexvalid;  /* state of rd_indexlist: 0 = not valid, 1 =
                                 * valid, 2 = temporarily forced */
    SubTransactionId rd_createSubid;            /* rel was created in current xact */
    SubTransactionId rd_newRelfilenodeSubid;    /* new relfilenode assigned in
                                                 * current xact */

    /*
     * rd_createSubid is the ID of the highest subtransaction the rel has
     * survived into; or zero if the rel was not created in the current top
     * transaction. This should be relied on only for optimization purposes;
     * it is possible for new-ness to be "forgotten" (eg, after CLUSTER).
     * Likewise, rd_newRelfilenodeSubid is the ID of the highest
     * subtransaction the relfilenode change has survived into, or zero if not
     * changed in the current transaction (or we have forgotten changing it).
     */
    Form_pg_class rd_rel;       /* RELATION tuple */
    TupleDesc   rd_att;         /* tuple descriptor */
    Oid         rd_id;          /* relation's object id */
    List       *rd_indexlist;   /* list of OIDs of indexes on relation */
    Bitmapset  *rd_indexattr;   /* identifies columns used in indexes */
    Oid         rd_oidindex;    /* OID of unique index on OID, if any */
    LockInfoData rd_lockInfo;   /* lock mgr's info for locking relation */
    RuleLock   *rd_rules;       /* rewrite rules */
    MemoryContext rd_rulescxt;  /* private memory cxt for rd_rules, if any */
    TriggerDesc *trigdesc;      /* Trigger info, or NULL if rel has none */

    /*
     * rd_options is set whenever rd_rel is loaded into the relcache entry.
     * Note that you can NOT look into rd_rel for this data. NULL means "use
     * defaults".
     */
    bytea      *rd_options;     /* parsed pg_class.reloptions */

    /* These are non-NULL only for an index relation: */
    Form_pg_index rd_index;     /* pg_index tuple describing this index */
    struct HeapTupleData *rd_indextuple;    /* all of pg_index tuple */
    /* "struct HeapTupleData *" avoids need to include htup.h here */
    Form_pg_am  rd_am;          /* pg_am tuple for index's AM */

    /*
     * index access support info (used only for an index relation)
     *
     * Note: only default operators and support procs for each opclass are
     * cached, namely those with lefttype and righttype equal to the opclass's
     * opcintype. The arrays are indexed by strategy or support number, which
     * is a sufficient identifier given that restriction.
     *
     * Note: rd_amcache is available for index AMs to cache private data about
     * an index. This must be just a cache since it may get reset at any time
     * (in particular, it will get reset by a relcache inval message for the
     * index). If used, it must point to a single memory chunk palloc'd in
     * rd_indexcxt. A relcache reset will include freeing that chunk and
     * setting rd_amcache = NULL.
     */
    MemoryContext rd_indexcxt;  /* private memory cxt for this stuff */
    RelationAmInfo *rd_aminfo;  /* lookup info for funcs found in pg_am */
    Oid        *rd_opfamily;    /* OIDs of op families for each index col */
    Oid        *rd_opcintype;   /* OIDs of opclass declared input data types */
    Oid        *rd_operator;    /* OIDs of index operators */
    RegProcedure *rd_support;   /* OIDs of support procedures */
    FmgrInfo   *rd_supportinfo; /* lookup info for support procedures */
    int16      *rd_indoption;   /* per-column AM-specific flags */
    List       *rd_indexprs;    /* index expression trees, if any */
    List       *rd_indpred;     /* index predicate tree, if any */
    void       *rd_amcache;     /* available for use by index AM */

    /* use "struct" here to avoid needing to include pgstat.h: */
    struct PgStat_TableStatus *pgstat_info;     /* statistics collection area */
} RelationData;
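Several of the rd_* members above are cached copies of system catalog rows. A rough way
to look at the corresponding catalog data from SQL; the table name 'products' is just an
example carried over from the earlier notes:

    SELECT c.oid, c.relname, c.relfilenode, c.relhasindex, c.reloptions
      FROM pg_class c
     WHERE c.relname = 'products';               -- backs rd_id, rd_rel, rd_options

    SELECT i.indexrelid::regclass, i.indkey
      FROM pg_index i
     WHERE i.indrelid = 'products'::regclass;    -- backs rd_index / rd_indexlist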