Archive for January, 2012

Perl Carp

Just a quick one for the Perl visitors who have always wondered: what exactly does Carp do, and why should I be using it?

Let’s assume you have a DB model that handles all db transactions via DBIx::Simple or DBI, etc.

package DB::Model;

use strict;
use warnings;
use Carp qw/croak/;
sub do {
  my $self = shift;
  my ($schema, $sql, @binds) = @_;
  # $self->logger->("Performing some task... : $sql on $schema");
  $self->db->query($sql, @binds) or croak $self->db->error;
}

1;

Now, in some other part of your program:


use strict;
use warnings;

use DB::Model; # or from some IOC or DB provider in your application, etc.

my $model = DB::Model->new; # or however your application constructs the model

$model->do('my_schema', 'SELECT id FROM foo INNER JOIN bar USING(id) WHERE id = ?', $id);

If there’s an error in your SQL statement, the ‘croak’ call will report the database exception like so:

DBD::mysql::st execute failed: Column 'id' in where clause is ambiguous at lib/DB/ line xxx

Well, that’s nice, but it’s not very helpful if you have dozens of places in your code where you’re making that call to the database.
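To see what croak buys you over a plain die, here’s a tiny self-contained sketch (the package and sub names here are made up for illustration): die blames the line inside the library, while croak blames the line where the *caller* invoked it.

```perl
# Minimal sketch of croak vs. die (hypothetical package / sub names).
use strict;
use warnings;

package My::Helper;
use Carp qw(croak);

sub double {
    my ($value) = @_;
    # die here would point the error at this line, deep inside My::Helper;
    # croak instead reports the file and line where the caller invoked double().
    croak 'double() needs a defined value' unless defined $value;
    return $value * 2;
}

package main;

my $doubled = My::Helper::double(21);
print "$doubled\n";    # 42

# Uncommenting the next line would croak, blaming *this* file and line:
# My::Helper::double(undef);
```

That caller-side line number is exactly what you want when the same helper is invoked from dozens of places.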

If you’re using Carp, you can easily turn on verbose mode and watch the stack trace walk back to the original caller of the method / function.


use strict;
use warnings;
use Carp;

use DB::Model; # or from some IOC or DB provider in your application, etc.
$Carp::Verbose = 1; # testing / debugging mode (you can even toggle this via command-line args)

my $model = DB::Model->new; # or however your application constructs the model
$model->do('my_schema', 'SELECT id FROM foo INNER JOIN bar USING(id) WHERE id = ?', $id);

Which produces a trace like:

DBD::mysql::st execute failed: Column 'id' in where clause is ambiguous at lib/DB/ line xxx
  DB::Model::do('DB::Model','SELECT id FROM foo INNER JOIN bar USING(id) WHERE id = ?',1) called at lib/DB/ line xxx
  ... and so on, back up to the original caller

This is really helpful while you’re writing your tests, in active development, or in debug mode.

When you’re done with your testing, you can remove the verbose mode.
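As an aside, you don’t have to set the package variable by hand; Carp can switch verbose mode on at import time, which makes it easy to flip on only when you need it.

```perl
# Importing 'verbose' from Carp sets $Carp::Verbose for you,
# so every carp / croak in the program emits a full stack trace.
use strict;
use warnings;

use Carp 'verbose';

print "$Carp::Verbose\n";    # 1

# Or, with no code changes at all, from the command line:
#   perl -MCarp=verbose your_script.pl
```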

Personally, when it’s available, I like to use croak to throw exceptions in conjunction with a logger class so I can quickly evaluate the full SQL statement + parameters. At times I have added additional levels of exception handling, using Exception::Class and Try::Tiny blocks around the exceptions for those that can be recovered, or that need special warning messages / response codes (think RESTful web services and HTTP response codes for various resource exceptions, e.g. invalid data).
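Here’s a minimal sketch of that recovery pattern (the sub and data are hypothetical); a try / catch module wraps the same mechanism, but core eval {} keeps this example dependency-free:

```perl
# Catching a croak-ed exception and mapping it to an application-level
# response code, RESTful-service style. fetch_widget() is made up.
use strict;
use warnings;
use Carp qw(croak);

sub fetch_widget {
    my ($id) = @_;
    croak "no such widget: $id" unless $id == 1;
    return { id => $id, name => 'sprocket' };
}

my $widget = eval { fetch_widget(999) };
my $status;
if ($@) {
    # The low-level croak message is in $@; translate it for the client
    # (think a 404 for a missing RESTful resource).
    $status = 404;
} else {
    $status = 200;
}
print "$status\n";    # 404
```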

See the Carp documentation on CPAN for more information.

Multi-Tenant Database Design

@Work, we’re a Software as a Service (SaaS) company (although we’re evolving into a platform) providing data integration (Perl ETL), BI reporting, and other industry (business equipment) solutions such as territory mapping and compensation programs. Like most SaaS companies, database design, architecture, and security are of the utmost importance to us, but how do you go about selecting the ‘right’ multi-tenant solution for your problem? As you start the requirements-gathering and analysis process, you will undoubtedly come up with more questions, such as:

  • Are there laws and regulations in place that your application / data storage solutions must conform to (Safe Harbor laws, encryption standards, credit cards, etc)?
  • How do you ensure only authenticated users have access to the data they’re authorized to view / manage?
  • How do you handle fail-over scenarios and reduce the risk of transactional data loss?
  • How do you deal with backups, while keeping the aforementioned items in check?
  • What’s your restoration process, has it been tested (frequency of tests)?
  • How is the performance of your application going to be affected by any design decisions you make?
  • And so on and so on…

When dealing with multi-tenant data, I like to design one master schema (a ‘database’ in MySQL terms), and then create one schema for each customer / client / vendor / tenant. This design gives you quick and easy encapsulation of each customer’s data and fast(er) backup and restoration processes, and it can offer some natural security benefits if you tie the database user to the schema. Of course, YMMV.
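To make the schema-per-tenant idea concrete, here’s a hypothetical helper (names and naming convention are my own invention, not a standard): each tenant id maps to its own MySQL schema and its own database user, so the credentials themselves fence one tenant’s data off from another’s.

```perl
# Hypothetical tenant -> schema routing for a schema-per-tenant design.
use strict;
use warnings;

sub tenant_connection_info {
    my ($tenant_id) = @_;
    my $schema = sprintf 'tenant_%04d', $tenant_id;    # one schema per tenant
    return {
        dsn  => "DBI:mysql:database=$schema;host=localhost",
        user => "app_$schema",    # per-tenant DB user, tied to the schema
    };
}

my $info = tenant_connection_info(42);
print "$info->{dsn}\n";     # DBI:mysql:database=tenant_0042;host=localhost
print "$info->{user}\n";    # app_tenant_0042
```

In a real application you’d hand the dsn / user pair to DBI->connect (with a per-tenant password from your secrets store), and the tenant user would be GRANTed rights on its own schema only.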

Downtime and / or data loss in 2012 should be unacceptable to any SaaS provider (sure, there’s maintenance and migrations to consider), and both should be treated as VERY expensive. There are various ways to handle fail-over configuration of MASTER / SLAVE relationships for this design; MySQL’s built-in replication and / or a Continuent solution may be the right fit for you on this critical topic, and again, YMMV. I’ve used a combination of solutions in the past, but the basic premise is to keep it as simple as possible while maintaining near-100% uptime and redundancy across the MASTER / SLAVE clusters.

There are many great articles available on the web that go into this concept in much greater detail than I will.


Happy New Year!

I’ve challenged myself to really make an effort and contribute more to my blog. I’ve set a goal to write 2-3 entries a month (this one won’t count)…

Some technical areas I’ll be touching on include:

– Webservers (nginx and apache)
– RESTful web services and best practices
– Perl (Moose, Mouse, Moo + OOP, MVC, refactoring, FAT models)
– Javascript (JQuery, Sencha Ext Js + Touch)
– Databases (Postgres, MSSQL, Oracle, MySQL)
– Project Management
– IT Management

I’m also considering starting an open-source project, hosting it on GitHub, and writing several entries about the entire life-cycle of the project.

Have a great 2012 and thanks for visiting!