A week after the release of PgLife, the site is averaging thirty active users. (I define an active user as an IP address that has viewed the site for at least five minutes during the past hour.) I consider that a success. Since the release of PgLife, I have increased the content update interval and added an
About page explaining the site’s purpose, which also includes the active user count.
The site uses AJAX, Perl, Procmail rules, and Apache to collect and deliver dynamic content. Recent improvements in the Postgres mailing list archive
feature set have made linking to emails much simpler.
PgLife was my first attempt at a dynamic website, and I learned a few things. First, I learned the value of having an alert file that can force a browser reload to push fixes and improvements to the browser. Second, I used the same file to allow pushing of news alerts to users, e.g. 9.3 Beta1. Third, I learned the importance of controlling browser and server caching and revalidation when using dynamic content.
I last wrote on this blog about hardware innovation driving up the cost of Oracle licenses. As the computing power of commodity servers soars, so does the cost of Oracle licenses, thanks to that company's pricing policies. This has worked out well for us, as government agencies have been turning to PostgreSQL to replace Oracle in growing numbers. But there's more to that mass migration story than hardware, and today I want to share a glimpse into what some of these agencies experience when turning to us to help them achieve success with PostgreSQL.
Let’s start with slashing database costs by over 90 percent, a scenario documented by a division within the federal government that is charged with evaluating enterprise technologies.
This particular client was exploring a cost-efficient solution for one of its organizations that needed to cut costs but not adversely affect its mission, which meant finding an open source database solution with equivalent or better features than its current, money-eating solution. After evaluating a few alternatives, including MySQL and Oracle, the organization conducting the study found that EnterpriseDB's Postgres Plus Advanced Server provided the best performance while significantly cutting costs. Compared to Oracle, Postgres Plus Advanced Server delivered a staggering cost reduction of over 90 percent. Our solution also ranked significantly better on cost and performance than MySQL and a couple of other open source solutions.
We have other terrific stories to tell in the government sector, though as those of you who do business with the feds know well, naming the actual characters is rarely possible. There are a couple of agencies frequently referenced as PostgreSQL users, such as the Federal Aviation Administration (FAA), the US State Department and the National Weather Service. Suffice it to say there are plenty of agencies that use PostgreSQL, many of which look to us to provide support and training for their deployments of community PostgreSQL or our own Postgres Plus Advanced Server.
New Postgres users are often confused by the variety of activities in the Postgres community. Even we old-timers can be overwhelmed monitoring the Postgres community. To meet these two needs, I have created a website called PgLife which dynamically displays the current activity in each of the community areas: email, development, blogging, media, news, events, and IRC chat. I am hopeful people will find this useful in understanding the
Postgres community, and in increasing their involvement.
I have completed the 9.3 release notes so they are ready for beta release. They will be regularly updated until 9.3 final.
With our final 9.3 commit-fest almost complete, and few 9.3 open items remaining, I have been asked to start on the 9.3 release notes. I have blogged about
this process in the past (1, 2), so I will not bore you with the details. I expect to be done in a week.
Postgres supports all the SQL-standard constraint types, e.g. UNIQUE, CHECK. These work fine at the server level:
CREATE TABLE ledger (id SERIAL, balance NUMERIC(10,2) CHECK (balance > 0));

INSERT INTO ledger VALUES (DEFAULT, -2.00);
ERROR:  new row for relation "ledger" violates check constraint "ledger_balance_check"
DETAIL:  Failing row contains (1, -2.00).
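The other standard constraint types are enforced the same way. As a sketch (the accounts table here is illustrative, not from the original post), a UNIQUE constraint also rejects bad data at the server level:

```sql
-- UNIQUE rejects duplicate values at the server level
CREATE TABLE accounts (id SERIAL UNIQUE, name TEXT);

INSERT INTO accounts VALUES (1, 'alice');
INSERT INTO accounts VALUES (1, 'bob');
-- ERROR:  duplicate key value violates unique constraint "accounts_id_key"
-- DETAIL:  Key (id)=(1) already exists.
```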
SQL has never been good at handling dynamic columns: it can easily process tables with a fixed number of columns, but dynamic column handling has always been a challenge. (Dynamic columns are often used to implement user-defined fields.) One approach has been to define columns for every possible value that might be stored in a row; this leads to sparse tables containing mostly NULL values. Because Postgres stores NULL values in a single bit, there is little overhead to this approach. However, it is hard to call these dynamic columns since every new column requires an ALTER TABLE … ADD COLUMN command.
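To illustrate the sparse-table approach (table and column names here are mine, purely for illustration):

```sql
-- one column per possible user-defined field; most rows leave most of them NULL
CREATE TABLE items (
    id     SERIAL,
    color  TEXT,      -- NULL for items without a color
    weight NUMERIC,   -- NULL for items without a weight
    size   TEXT       -- NULL for items without a size
);

-- each new "dynamic" field requires schema surgery
ALTER TABLE items ADD COLUMN flavor TEXT;
```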
Another approach is to use entity/attribute/value layout, where each value is stored in its own row. This obviously increases storage requirements, has performance challenges, and adds complexity to SQL queries.
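A minimal entity/attribute/value sketch (names are illustrative) shows where the query complexity comes from, since reassembling one entity requires a join per attribute:

```sql
-- entity/attribute/value: one row per field value
CREATE TABLE item_attrs (
    item_id   INTEGER,
    attribute TEXT,
    value     TEXT
);

INSERT INTO item_attrs VALUES (1, 'color', 'red'), (1, 'weight', '12.5');

-- pivoting two attributes back into one row takes a self-join
SELECT color.value AS color, weight.value AS weight
FROM item_attrs color
JOIN item_attrs weight ON color.item_id = weight.item_id
WHERE color.attribute = 'color'
  AND weight.attribute = 'weight'
  AND color.item_id = 1;
```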
Fortunately, Postgres has various options to efficiently handle dynamic columns. The simplest solution is arrays which allow multiple values of the same data type to be stored in a single field. This assumes that the values are independent, or that their ordinal position in the array is sufficiently meaningful.
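A quick array sketch (the readings table is my example, not from the post):

```sql
-- multiple same-typed values stored in a single field
CREATE TABLE readings (id SERIAL, temps NUMERIC[]);

INSERT INTO readings VALUES (DEFAULT, '{20.1, 20.7, 21.3}');

-- values are addressed by ordinal position; Postgres arrays are 1-based
SELECT temps[2] FROM readings;
```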
In December, I reported that a bug had been discovered in how pg_upgrade handled invalid indexes, and that the next release of pg_upgrade
would throw an error if invalid indexes were found; that release happened in February. In an easily-overlooked development, last week’s
security release changed pg_dump so it now skips dumping invalid indexes. This allows the most recent version of pg_upgrade to upgrade clusters with invalid indexes, rather than throwing an error.
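For reference, invalid indexes are flagged in the system catalogs, so a query along these lines will list them before an upgrade:

```sql
-- list indexes marked invalid (e.g. from a failed CREATE INDEX CONCURRENTLY)
SELECT c.relname
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE NOT i.indisvalid;
```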
- index only frequently-accessed values, allowing smaller index files
- avoid indexing common values, which are unlikely to be useful in an index
- smaller index files allow faster index traversal and improve index caching
- avoid index insertion overhead for non-indexed values
- constrain uniqueness to a subset of table rows
People usually don't use expression or partial indexes initially, but there are certainly some use cases that can only be solved with these index features.
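The benefits listed above map directly onto index definitions; a sketch with illustrative table names:

```sql
-- partial index: index only the rows the application actually queries
CREATE INDEX orders_open_idx ON orders (customer_id) WHERE status = 'open';

-- expression index: index a computed value, e.g. for case-insensitive lookups
CREATE INDEX users_lower_email_idx ON users (lower(email));

-- partial unique index: constrain uniqueness to a subset of table rows
CREATE UNIQUE INDEX one_active_per_user ON sessions (user_id) WHERE active;
```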