Selecting Next and Previous - MySQL and PEAR::MDB2

Also relates to PHP and MySQL

Quite a considerable time ago I discussed methods for selecting the next and previous records from a MySQL database using MySQL variables. This method required sending multiple requests to the database via the abstraction layer - at the time I was using PEAR::DB.

Things have progressed a bit since early 2004, with more recent versions of MySQL introducing sub-select queries and PHP5's PEAR::MDB2 abstraction layer offering better handling of parameters in SQL queries, such that the entire process of selecting the next and previous record can be simplified into a single query. Let us dive straight into the code and then I will break it down:


$query = "SELECT IF(publish_date < :publish_date, 'prev', 'next') as position, title, CONCAT(:archive_dir, permalink) as link FROM news WHERE id IN
          (
            SELECT id FROM news WHERE 
              id = ( SELECT id FROM news WHERE publish_date < :publish_date AND post_status = 'published' ORDER by publish_date DESC LIMIT 0,1 ) 
                OR
              id = ( SELECT id FROM news WHERE publish_date > :publish_date AND post_status = 'published' ORDER by publish_date ASC LIMIT 0,1 )
          )";

$parameters = array(
  'publish_date' => $this->dataset->publish_date,
  'archive_dir' => self::getDirective('ss_news_archive_dir')
);
        
$links = $this->db->getAssoc($query, null, $parameters);

The objective of the above code is to get the next and previous published news entries, so the comparison is done on the publish dates relative to the current record.

Preparing to execute the query establishes two parameters (in $parameters) which will be passed to the MDB2 query. The publish date of the current record ($this->dataset->publish_date) is the first parameter while the second parameter is a configuration directive for the current News module.

Note that the actual call to the MDB2 layer ($this->db) requires the Extended module, so establishing a connection to the database might look like this:


$this->db = MDB2::factory($dsn);
$this->db->loadModule('Extended');              // provides getAssoc() and friends
$this->db->setFetchMode(MDB2_FETCHMODE_OBJECT); // rows are returned as objects
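
The $dsn above is not shown in the post; MDB2 accepts a standard PEAR-style DSN string, for example (the credentials and database name are placeholders):


$dsn = 'mysql://db_user:db_pass@localhost/news_db'; // placeholder credentials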

Now to the actual SQL query. Working backwards from the inside of the query the following three processes are taking place:

  1. The IDs of the previous and next records are retrieved by comparing publish dates with that of the current record. This is analogous to the technique described in the original post back in 2004.
  2. These two results are then fed into a select query for a single column (id).
  3. That column then becomes the search parameter for the outer query, which returns the complete set of data for the next and previous records (using the IN operator).

It is the second phase in the process above which essentially acts as the glue, allowing what would otherwise have to be a set of separate queries to be amalgamated into a single one. A typical result from the above code might be:


Array
(
  [prev] => stdClass Object
  (
    [position] => prev
    [title] => The previous post
    [link] => /newsdesk/previous_post.html
  )
  [next] => stdClass Object
  (
    [position] => next
    [title] => The next post
    [link] => /newsdesk/next_post.html
  )
)
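
How the result is then used is outside the scope of the post, but as a minimal sketch (the markup is purely illustrative) the $links array might be dropped into the page template like this:


foreach (array('prev', 'next') as $position) {
    // a first or last post will only have one of the two entries
    if (isset($links[$position])) {
        printf(
            '<a href="%s" rel="%s">%s</a>',
            htmlspecialchars($links[$position]->link),
            $position,
            htmlspecialchars($links[$position]->title)
        );
    }
}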

I forgot to mention that, from a performance perspective, this is very efficient: a few rough benchmark tests showed that execution time was similar to running just one of the three (or four) queries required to achieve the same data without sub-selects. Performance could possibly be improved even further, if required, by using an inner join instead of the sub-selects (but I have yet to try this method).
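
Purely as an untested sketch of that idea (not benchmarked, and only one possible interpretation), the IN clause could be swapped for a join against a derived table holding the two neighbouring ids, keeping the same named parameters:


// untested sketch - join against a derived table of the two neighbouring ids
$query = "SELECT IF(n.publish_date < :publish_date, 'prev', 'next') as position, n.title, CONCAT(:archive_dir, n.permalink) as link FROM news n INNER JOIN
          (
            ( SELECT id FROM news WHERE publish_date < :publish_date AND post_status = 'published' ORDER BY publish_date DESC LIMIT 0,1 )
              UNION
            ( SELECT id FROM news WHERE publish_date > :publish_date AND post_status = 'published' ORDER BY publish_date ASC LIMIT 0,1 )
          ) AS neighbours ON neighbours.id = n.id";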

Posted on Oct 14, 2008 at 12:00:19.

Resequence a MySQL column

Also relates to MySQL

The other day I was asked if I knew a way to resequence a column in a MySQL table. For example, if a record were removed from mid-table, how would you collapse all the ids above that record to avoid gaps? Of course this would be highly undesirable and potentially destructive if relationships exist, but it made an interesting little problem.

The initial solution I suggested, utilising the PEAR::DB package as abstraction layer for the PHP MySQL API, was as follows:


$id = {num}; // the id of the record that was deleted
$pr = $db->prepare('UPDATE table_name SET col_name = ! WHERE col_name = !');
do {
  // shift the record with id ($id + 1) down into the gap at $id
  $res = $db->execute($pr, array($id, $id + 1));
  $id++;
  if (DB::isError($res)) {
    // handle error
    break;
  }
} while ($db->affectedRows() > 0);

Errors are trapped to allow a break-out from the loop, and, since $db->affectedRows() can return -1 if the query fails, the loop condition looks for at least one affected row (there will only ever be one per iteration).

While this seemed like a nice simple solution, it doesn't account for a column that AUTO_INCREMENTs, so it assumes that values for col_name are entered manually. Then, courtesy of the MySQL Cookbook, I stumbled across a more efficient solution that takes just that into account by utilising the DDL component of the MySQL query language:


$query = "ALTER TABLE table_name "
       . "DROP col_name, "
       . "ADD col_name INT UNSIGNED NOT NULL AUTO_INCREMENT FIRST, "
       . "AUTO_INCREMENT = 1";
$res = @mysql_query($query, $conn);
if (!$res) {
  // handle errors
}

For simplicity, this just uses the standard PHP MySQL API. It will recreate col_name, placing it first in the schema order (FIRST), and thus rebuild the sequence with the increment starting from 1 (AUTO_INCREMENT = 1 - prior to 3.23.39 only). Finally, there is no need to redefine the PRIMARY KEY if it is col_name, since MySQL does not drop the key during the execution of a single ALTER statement.

Personally I cannot think of any reason I would have to resequence a column, especially if it were the PRIMARY KEY, but it is nice to know it is possible.

Posted on Apr 16, 2005 at 14:51:16. [Comments for Resequence a MySQL column- 0]

Trendalicious

Also relates to Apache and Blogging

Trendalicious provides a nice way to keep track of popular links in the del.icio.us database of social bookmarks. Unfortunately trendalicious does not come equipped with syndication, but it only takes a bite of the PEAR to put a simple request page together with HTTP_Request, Cache_Lite and an RSS builder.


// code snippet
require_once 'Cache/Lite.php';
require_once 'HTTP/Request.php';

$cache = new Cache_Lite($cache_options);

if (! $rss_content = $cache->get(TRENDALICIOUS_URL)) {

    $r = new HTTP_Request(TRENDALICIOUS_URL);
    $r->sendRequest();
    $page = $r->getResponseBody();

    // create rss content ($rss_content) from data in $page

    // no id given, so Cache_Lite reuses the id from the last get()
    $cache->save($rss_content);

}

header('Content-type: text/xml');
echo $rss_content;
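
The $cache_options array is not shown above; a minimal sketch, assuming nothing more than the 60-minute lifetime mentioned below, might be:


// hypothetical options - the cache directory is a placeholder
$cache_options = array(
    'cacheDir' => '/path/to/cache/',
    'lifeTime' => 3600   // 60 minutes
);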

Since I intended to plug this into Firefox live feeds, I gave the cache file a lifetime of 60 minutes to avoid bombarding the trendalicious page. I also wanted to keep the feed private (I need to keep personal bandwidth down as well), so I set up some basic security measures with mod_rewrite, a subdomain ...


RewriteCond   %{HTTP_HOST}       !^subdomain
RewriteCond   %{REMOTE_ADDR}  !^xxx\.xxx\.xxx\.
RewriteRule .* -  [F,L]

... and a directive in the robots.txt file.
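
The robots.txt directive itself is not reproduced here, but something along these lines (the feed path is a placeholder) keeps well-behaved crawlers away:


# hypothetical robots.txt entry
User-agent: *
Disallow: /trendalicious/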

Posted on Apr 06, 2005 at 01:53:39. [Comments for Trendalicious- 0]

2004 Weather Recordings

Also relates to Apache and Meteorology

The Weather Centre Archives have now been updated with daily pressure and temperature readings for 2004. Annual temperature summaries over the four years I have been taking recordings are as follows:

Annual temperature summary (°C) for the years 2001 to 2004

Year    Maximum    Minimum    Average
2001       22.1       -0.9        8.9
2002       23.6       -0.9       10.1
2003       27.5       -4.6       10.4
2004       23.2       -2.9       10.2

The annual means for the last three years all exceed the 1961 - 1990 mean CET surface temperature of 9.47°C and are consistent with the recurring higher than average mean annual temperatures recorded throughout the 1990s. Note, however, that my weather station is not housed in a Stevenson screen, so comparison between these readings and the principal meteorological records should only be interpreted for trends.

The principal reason for the Weather Archives is to record local variation in the climate at my location. Then, once enough data has been collated, I will be able to develop a weather application to analyse and graphically represent the data.

To this end I hope to do more work on the Weather Archives application this year and, to get the ball rolling, I decided to tweak the current API by incorporating a caching module for the graphs, which are dynamically created with PHP. This was a fairly painless process involving some URL rewriting to conceal the PHP scripts as image files and Cache_Lite to store the image data.

For example, mod_rewrite will translate a request for the file
/temperature/2004/01.png
to
/create_graph.php?year=2004&month=01&mime=png.


RewriteRule
  temperature/([0-9]{4})/([0-9]{2})\.(jpg|png|gif)$ 
  /create_graph.php?year=$1&month=$2&mime=$3

The routine in create_graph.php will then look for (or create) cached data to serve as the corresponding mime-type -


require_once 'Cache/Lite.php';

$id = A_UNIQUE_ID_FOR_IMAGE;

$cache = new Cache_Lite($options);
if (! $data = $cache->get($id)) {

    // here some code will use the GD Library
    // to create the image resource $image

    ob_start();
    switch ($mime) {
      case 'png':
         imagepng($image);
         break;

      // etc

    }

    // capture the raw image data and cache it for the next request
    $data = ob_get_contents();
    ob_end_clean();
    imagedestroy($image);

    $cache->save($data);
}
unset($cache);


switch ($mime) {
  case 'png':
     header("Content-type: image/png");
     break;

  // etc

}

echo $data;
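
To round the example off, the year, month and mime parameters supplied by the rewrite rule might be collected and turned into the cache id along these lines (a sketch only; everything beyond the three request parameters is my own naming):


// create_graph.php - collect the parameters supplied by mod_rewrite
$year  = isset($_GET['year'])  ? (int) $_GET['year']  : (int) date('Y');
$month = isset($_GET['month']) ? (int) $_GET['month'] : (int) date('m');
$mime  = (isset($_GET['mime']) && in_array($_GET['mime'], array('png', 'jpg', 'gif')))
       ? $_GET['mime'] : 'png';

// one possible unique cache id per graph/format combination
$id = sprintf('temperature_%04d_%02d_%s', $year, $month, $mime);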

So with that side of the API performance covered, I hope now to find time to develop this application further through the year. As well as graphical comparison of hot and cold day indicators, some interesting ideas with image transparencies and live requests to create comparative graph overlays spring to mind.

Posted on Jan 17, 2005 at 20:54:19. [Comments for 2004 Weather Recordings- 0]

Few More Changes

Also relates to Blogging

The comments form on this blog has now been updated to accept a few XHTML tags again. Of course this means validating the markup before submitting the data. This server is running PHP4, so unfortunately there is no access to the new Tidy library. Therefore I had to look around for some code to handle the validation.

In a previous incarnation of this blog I had put Simon Willison's SafeHtmlChecker class to work. This time round I have decided to allow only a few inline elements. This keeps formatting of the comments data to a minimum (for speed), and allows the XHTML validation to be plugged straight into my current validator class. To achieve this I have utilised a simple parser based around the PEAR::XML_HTMLSax package. This SAX-based parser is ideal since it will not fail automatically on badly formed XML.
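
As an illustration only (the class name and tag whitelist are my own, and I am assuming the standard XML_HTMLSax handler interface), the plumbing looks roughly like this:


require_once 'XML/HTMLSax.php';

// hypothetical handler - checks that only whitelisted inline tags
// are used and that they are correctly nested
class InlineTagChecker {
    var $allowed = array('em', 'strong', 'code', 'a', 'abbr');
    var $open    = array();
    var $valid   = true;

    function openHandler(&$parser, $name, $attrs) {
        if (!in_array(strtolower($name), $this->allowed)) {
            $this->valid = false;
            return;
        }
        $this->open[] = strtolower($name);
    }

    function closeHandler(&$parser, $name) {
        if (array_pop($this->open) !== strtolower($name)) {
            $this->valid = false;
        }
    }
}

$checker = new InlineTagChecker();
$parser  = new XML_HTMLSax();
$parser->set_object($checker);
$parser->set_element_handler('openHandler', 'closeHandler');
$parser->parse($comment_text);

if (!$checker->valid || count($checker->open) > 0) {
    // reject the comment or fall back to stripping the markup
}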

The new form is still in beta so I apologise if anything goes belly-up in use!

Posted on Aug 14, 2004 at 16:52:05. [Comments for Few More Changes- 6]
