E.20. Postgres-XL Release 9.5r1.2

E.20.1. Migration to Version Postgres-XL 9.5r1.2
E.20.2. Changes

Release Date

2016-07-27

This release contains a variety of fixes from Postgres-XL 9.5r1.1 release. For information about new features in the Postgres-XL 9.5r1 major release, see Section E.22.

E.20.1. Migration to Version Postgres-XL 9.5r1.2

A dump/restore is not required for those running Postgres-XL 9.5r1 or Postgres-XL 9.5r1.1.

E.20.2. Changes

  • Fix a bug which would reset the planner statistics of a table on the coordinator when an index is created on the table or when the CLUSTER command is run on it.

  • Make sure that the count of tuples affected by INSERT/DELETE/UPDATE is correctly computed when Fast-Query-Shipping is used.

  • Show remote node/backend and originating coordinator node/backend details in the ps output when a remote subplan is being executed on a datanode, for ease of debugging.

  • Ensure that "init all" (and friends) does not remove existing data directories unless the "force" option is specified by the caller.

  • Handle ON COMMIT actions on temporary tables appropriately on the datanodes.

  • Avoid pushing down evaluation of VALUES clause to a datanode for replicated tables, unless it contains volatile function(s).

    This should provide a good performance boost for affected cases by avoiding another connection, thus reducing connection overhead as well as latency.

  • Fix a memory leak while running ALTER TABLE DISTRIBUTE BY.

  • Use GTM_Sequence type to hold value of a sequence on GTM.

    We were incorrectly using "int" at a couple of places, which is not wide enough to store 64-bit sequence values.
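
    As a hypothetical illustration of the truncation (the names below are assumptions for this sketch, not the actual Postgres-XL code), a 64-bit sequence value narrowed through an "int" silently loses its upper bits:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Assumed stand-in for the GTM's 64-bit sequence type. */
    typedef int64_t GTM_Sequence;

    /* Buggy pattern: narrowing the 64-bit value through an int.
     * On common platforms the cast wraps modulo 2^32 (the exact
     * behavior is implementation-defined in C). */
    static GTM_Sequence store_via_int(GTM_Sequence val)
    {
        int narrowed = (int) val;   /* truncates values beyond 32 bits */
        return (GTM_Sequence) narrowed;
    }

    /* Fixed pattern: keep the value in the 64-bit type end to end. */
    static GTM_Sequence store_via_gtm_sequence(GTM_Sequence val)
    {
        GTM_Sequence kept = val;
        return kept;
    }
    ```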

  • Never use an invalid XID if we fail to connect to the GTM.

    Earlier, a node would happily proceed if the GTM became dead or unreachable. This could result in random problems, since the rest of the code is not prepared to deal with that situation (as seen from the crash in the TAP tests).

  • Do not recompute the relation size every time a relation is rescanned with a sequential scan.

    This can provide a big performance boost when the inner side of a nested loop is a sequential scan and the outer relation has a large number of rows.
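
    A minimal sketch of the caching idea, with assumed names and an assumed relation size (not the actual executor code): the size is computed on the first scan and reused on every rescan.

    ```c
    #include <assert.h>

    static int size_computations = 0;   /* counts the expensive size lookups */

    /* Stand-in for the expensive call that measures the relation. */
    static long compute_relation_size(void)
    {
        size_computations++;
        return 12800;                   /* assumed size, for illustration */
    }

    typedef struct SeqScanState {
        long cached_size;               /* -1 until first computed */
    } SeqScanState;

    /* Compute the size once, then serve rescans from the cache. */
    static long scan_size(SeqScanState *state)
    {
        if (state->cached_size < 0)
            state->cached_size = compute_relation_size();
        return state->cached_size;
    }
    ```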

  • Block FOR SHARE/UPDATE for queries involving joins

    Per report from Shaun Thomas, we do not yet support row locking when a query has a join between tables. While it may sometimes give an error, worse, it may silently lock the wrong rows, leading to application logic failures. The feature is now blocked until a more appropriate fix is available.

  • Add several regression test cases.

  • Use 2^32 modulo computation to convert a signed integer to an unsigned value, since abs() may give a different result.

    This brings the redistribution code in sync with the way the hash modulo is computed elsewhere in the code. Earlier versions may have redistributed replicated tables wrongly when their distribution strategy was changed to hash or modulo distribution.
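
    The difference between the two conversions can be seen in a small sketch (the function names and node count below are illustrative assumptions, not the actual redistribution code):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* 2^32 modulo: the cast to unsigned is well defined in C and maps,
     * e.g., -7 to 4294967289, matching how the hash modulo is computed
     * elsewhere in the code. */
    static int bucket_via_unsigned(int32_t hash, int nodes)
    {
        return (int) ((uint32_t) hash % (uint32_t) nodes);
    }

    /* abs() folds negative hashes onto positive ones, so it can pick a
     * different bucket for the same value (and abs(INT32_MIN) is
     * undefined behavior besides). */
    static int bucket_via_abs(int32_t hash, int nodes)
    {
        return abs(hash) % nodes;
    }
    ```

    For a hash of -7 across 3 nodes the two methods disagree: 4294967289 % 3 is 0, while abs(-7) % 3 is 1.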

  • Load balance remote subplan execution by choosing a node randomly instead of always picking up the first node.

    When the planner has a choice of executing a subplan on any of the remote nodes, it would always execute the subplan on the first node. That can cause excessive load and an excessive number of connections on that node. This change fixes that by choosing a node randomly from the list of available nodes.
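
    A minimal sketch of the two selection policies, with assumed names (not the planner's actual code):

    ```c
    #include <assert.h>
    #include <stdlib.h>

    /* Old behavior: always the first candidate, concentrating load
     * and connections on that one node. */
    static int pick_first_node(const int *nodes, int count)
    {
        (void) count;               /* unused; only the head is taken */
        return nodes[0];
    }

    /* New behavior: a random member of the candidate list, spreading
     * subplan execution across the available nodes. */
    static int pick_random_node(const int *nodes, int count)
    {
        return nodes[rand() % count];
    }
    ```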