We just concluded the PgCon Developer Meeting. I had two big items: first, EnterpriseDB has dedicated staff to start work on parallelizing Postgres queries, particularly in-memory sorts. I have previously expressed the importance (and complexity) of parallelism. Second, EnterpriseDB has also dedicated staff to help improve Pgpool-II. Pgpool is the Swiss Army knife of replication tools, and I am hopeful that additional development work will further increase its popularity.
I finally have a chance to look at Vaadin version 7 now, which is already up to v7.0.6 (nice work team!). There is a wealth of information from Vaadin about the changes and migration path, but I’m skipping all that and jumping right to examples as an experiment. Thus, my first blog ever about a book.
I haven’t always liked “cookbook” style technical books (“Python Cookbook” is a great one), but I’m enjoying Packt Pub’s Vaadin 7 Cookbook a lot. It’s not trying to be a replacement for the Book of Vaadin, which is still a required reference. Instead, it’s a very easy-to-follow tour of things you’re either going to do, or will want to do once you know you can.
When I say it’s easy to follow, I mean the code examples are very concise and get to the point without becoming overwhelming. I could probably learn a lesson there for my own blog examples, heh. It helps that Vaadin applications are easy to construct, but it still takes some nice writing to produce a fully working, useful example that can be explained in small chunks and runs on its own. Each “recipe” introduces an end goal, walks you through the code, and then explains why it works. The formula works well. Links to the Vaadin API or the Book of Vaadin are included for more information; the links alone make this a great reference source.
Overall, this is a great book of examples that cover a lot of common and not-so-common tasks in writing a Vaadin application. For the Vaadin newcomer, this book illustrates the power of the Vaadin framework very quickly. Reading it reminded me of how I felt when I first learned about Vaadin a thousand years ago. For the veteran developer, there will be things you haven’t tried yet, especially if you’re making the switch now from version 6 to 7.
If I have a complaint at all, it’s that some of the downloaded examples are Maven projects (so there’s no setup to run), while others are just the source files. Still, it’s pretty simple to change to an example directory and run “rm -rf $dir/* && cp -R . $dir/” where $dir is the package in an already-made project. This works for most of them — don’t forget to move .js files to the ‘resources’ dir in a Maven project. So it’s a very minor nit. Having all this example code in one place is easily worth buying the book.
A week after the release of PgLife, the site is averaging thirty active users. (I define an active user as an IP address that has viewed the site for at least five minutes during the past hour.) I consider that a success. Since the release of PgLife, I have increased the content update interval and added an About page explaining the site’s purpose, which also includes the active user count.
The site uses AJAX, Perl, Procmail rules, and Apache to collect and deliver dynamic content. Recent improvements in the Postgres mailing list archive feature set have made linking to emails much simpler.
PgLife was my first attempt at a dynamic website, and I learned a few things. First, I learned the value of having an alert file that can force a browser reload to push out fixes and improvements. Second, I used the same file to push news alerts to users, e.g. 9.3 Beta1. Third, I learned the importance of controlling browser and server caching and revalidation when serving dynamic content.
I last wrote on this blog about hardware innovation driving up the cost of Oracle licenses. Because the computing power of a commodity server is soaring, so is the cost of Oracle licenses due to that company’s pricing policies. This has worked out well for us, as government agencies have been turning to PostgreSQL to replace Oracle in growing numbers. But there’s more to that mass migration story than hardware, and today I want to share a glimpse into what some of these agencies experience when turning to us to help them achieve success with PostgreSQL.
Let’s start with slashing database costs by over 90 percent, a scenario documented by a division within the federal government that is charged with evaluating enterprise technologies.
This particular client was exploring a cost-efficient solution for one of its organizations that needed to cut costs without adversely affecting its mission, which meant finding an open source database solution with equivalent or better features than its current, money-eating solution. After evaluating a few alternatives, including MySQL and Oracle, the organization conducting the study found that EnterpriseDB’s Postgres Plus Advanced Server provided the best performance while significantly cutting costs. Compared to Oracle, there was a staggering cost reduction of over 90 percent. Our solution also ranked significantly better on cost and performance than MySQL and a couple of other open source solutions.
We have other terrific stories to tell in the government sector, though as those of you who do business with the feds know well, naming the actual characters is rarely possible. There are a couple of agencies frequently referenced as PostgreSQL users, such as the Federal Aviation Administration (FAA), the US State Department, and the National Weather Service. Suffice it to say there are plenty of agencies that use PostgreSQL, many of which look to us to provide support and training for their deployments of community PostgreSQL or our own Postgres Plus Advanced Server.
New Postgres users are often confused by the variety of activities in the Postgres community. Even we old-timers can be overwhelmed in monitoring it all. To meet these two needs, I have created a website called PgLife, which dynamically displays the current activity in each of the community areas: email, development, blogging, media, news, events, and IRC chat. I am hopeful people will find this useful in understanding the Postgres community, and in increasing their involvement.
I have completed the 9.3 release notes, so they are ready for beta release. They will be regularly updated until 9.3 final.
With our final 9.3 commit-fest almost complete, and few 9.3 open items, I have been asked to start on the 9.3 release notes. I have blogged about this process in the past (1, 2), so I will not bore you with the details. I expect to be done in a week.
Postgres supports all the SQL-standard constraint types, e.g. UNIQUE, CHECK. These work fine at the server level:
	CREATE TABLE ledger (id SERIAL, balance NUMERIC(10,2) CHECK (balance > 0));

	INSERT INTO ledger VALUES (DEFAULT, -2.00);
	ERROR:  new row for relation "ledger" violates check constraint "ledger_balance_check"
	DETAIL:  Failing row contains (1, -2.00).
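The other constraint types are enforced the same way on the server. A minimal sketch with UNIQUE (the table and column names here are hypothetical, not from the post):

```sql
CREATE TABLE accounts (email TEXT UNIQUE);

INSERT INTO accounts VALUES ('alice@example.com');  -- succeeds
INSERT INTO accounts VALUES ('alice@example.com');  -- rejected by the server
-- ERROR:  duplicate key value violates unique constraint "accounts_email_key"
```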
SQL has never been good at handling dynamic columns: it can easily process tables with a fixed number of columns, but dynamic column handling has always been a challenge. (Dynamic columns are often used to implement user-defined fields.) One approach has been to define columns for every possible value that might be stored in a row; this leads to sparse tables containing mostly NULL values. Because Postgres stores NULL values in a single bit, there is little overhead to this approach. However, it is hard to call these dynamic columns since every new column requires an ALTER TABLE … ADD COLUMN command.
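A sketch of that sparse-column approach (all names here are hypothetical):

```sql
-- One column per possible user-defined field; most rows leave most of them NULL.
CREATE TABLE items (id SERIAL, name TEXT, custom1 TEXT, custom2 TEXT);

-- Adding a "new" dynamic field means changing the schema:
ALTER TABLE items ADD COLUMN custom3 TEXT;
```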
Another approach is to use entity/attribute/value layout, where each value is stored in its own row. This obviously increases storage requirements, has performance challenges, and adds complexity to SQL queries.
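A minimal sketch of an entity/attribute/value layout (names are hypothetical), showing why storage grows and queries get more complex:

```sql
CREATE TABLE item_attrs (
    item_id   INTEGER,
    attribute TEXT,
    value     TEXT
);

-- Each field of each item becomes its own row:
INSERT INTO item_attrs VALUES (1, 'color', 'red'), (1, 'size', 'large');

-- Reassembling one logical "row" requires filtering (or joins/aggregation):
SELECT attribute, value FROM item_attrs WHERE item_id = 1;
```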
Fortunately, Postgres has various options to efficiently handle dynamic columns. The simplest solution is arrays, which allow multiple values of the same data type to be stored in a single field. This assumes that the values are independent, or that their ordinal position in the array is sufficiently meaningful.
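A quick sketch of the array approach (the table and column names are hypothetical):

```sql
CREATE TABLE measurements (id SERIAL, readings NUMERIC[]);

INSERT INTO measurements (readings) VALUES ('{1.5, 2.0, 2.5}');

-- Postgres arrays are 1-based, so this selects the second value, 2.0:
SELECT readings[2] FROM measurements;
```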