Useful Links: December 2009

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Script to back up a MySQL database (e.g. for WordPress blogs)

Jeff Atwood’s recent blog failure has probably served as a timely reminder to website owners across the ‘net – and I’m pleased to see that Jeff got his site back (some of us would be less lucky).  At the time, I tweeted this:

“RT @edbott: Every webmaster’s nightmare: ^MW Feeling sorry for @codinghorror and thanking @alexcoles for my backups”

To which the response from my hosting provider came back as “er… what backups?!”.  Thankfully, that was an attempt at humour (either that or Alex had forgotten that he’d helped me set up some backups last summer – it helps to be on first-name terms with the guys who look after your Internet presence)!

In common with many blogs, markwilson.it runs on self-hosted WordPress and the key to its existence is a MySQL database.  Themes, etc. can be recreated… the data cannot (at least not easily).

This is where the following script comes in.  Run as a cron job on my host’s server, it makes a copy of my WordPress database and e-mails it to me (which means I have archive copies in my Google mail).  It took some tweaking (by Alex) to run without errors and I’d like to link to its original source but the link I have appears to be dead – I’ve left the original author’s comments largely intact though – hopefully he won’t mind:


#! /usr/bin/php -q
<?php
/*
MySQL database backup script.
Run it from cron and discard the output, e.g.:
0 3 * * * /usr/bin/php /home/username/wpbackup/backup.php > /dev/null
(example schedule and path only - adjust to suit)
*/

// configure your database variables below:
$dbhost = 'localhost'; // Server address of your MySQL Server
$dbuser = 'db_username'; // Username to access MySQL database
$dbpass = 'db_password'; // Password to access MySQL database
$dbname = 'db_name'; // Database Name

// Options You May Optionally Configure

$use_gzip = "no"; // Set to no if you don't want the files sent in .gz format
$remove_sql_file = "yes"; // Set this to yes if you want to remove the .sql file after gzipping. Yes is recommended.
$remove_gzip_file = "yes"; // Set this to yes if you want to delete the gzip file also. I recommend leaving it to "no"

// Configure the path that this script resides on your server.

$savepath = "/home/username/wpbackup"; // Full path to this directory. Do not use trailing slash!

$send_email_w_file = "yes"; // Do you want this database backup sent to your email? Fill out the next 2 lines
$to = "emailaddress"; // Who to send the emails to
$from = "emailaddress"; // Who should the emails be sent from?
$senddate = date("j F Y");
$subject = "markwilson.it WordPress backup"; // Subject in the email to be sent.
$headers = "From: $from";
if ($send_email_w_file == "yes")
{
$message = "Attached is a MySQL dump of sitename dated $senddate.";
// Brief Message.
}
else
{
$message = "Your MySQL database has been backed up and placed in $savepath."; // Brief Message.
}

$use_ftp = "no"; // Do you want this database backup uploaded to an ftp server? Fill out the next 4 lines
$ftp_server = "localhost"; // FTP hostname
$ftp_user_name = "ftp_username"; // FTP username
$ftp_user_pass = "ftp_password"; // FTP password
$ftp_path = "/"; // This is the path to upload on your ftp server!

// Do not Modify below this line! It will void your warranty!

//---------GZIP CLASS
class zipfile
{

var $datasec = array(); // array to store compressed data
var $ctrl_dir = array(); // central directory
var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00"; // end of central directory record
var $old_offset = 0;

function add_dir($name)

// adds "directory" to archive - do this before putting any files in directory!
// $name - name of directory... like this: "path/"
// ...then you can add files using add_file with names like "path/file.txt"
{
$name = str_replace("\\", "/", $name);

$fr = "\x50\x4b\x03\x04";
$fr .= "\x0a\x00"; // ver needed to extract
$fr .= "\x00\x00"; // gen purpose bit flag
$fr .= "\x00\x00"; // compression method
$fr .= "\x00\x00\x00\x00"; // last mod time and date

$fr .= pack("V",0); // crc32
$fr .= pack("V",0); // compressed filesize
$fr .= pack("V",0); // uncompressed filesize
$fr .= pack("v", strlen($name) ); // length of pathname
$fr .= pack("v", 0 ); // extra field length
$fr .= $name;
// end of "local file header" segment

// no "file data" segment for path

// "data descriptor" segment (optional but necessary if archive is not served as file)
$fr .= pack("V",0); // crc32 (a directory entry has no data, so these are all zero)
$fr .= pack("V",0); // compressed filesize
$fr .= pack("V",0); // uncompressed filesize

// add this entry to array
$this -> datasec[] = $fr;

$new_offset = strlen(implode("", $this->datasec));

// ext. file attributes mirrors MS-DOS directory attr byte, detailed
// at http://support.microsoft.com/suppor…s/Q125/0/19.asp

// now add to central record
$cdrec = "\x50\x4b\x01\x02";
$cdrec .= "\x00\x00"; // version made by
$cdrec .= "\x0a\x00"; // version needed to extract
$cdrec .= "\x00\x00"; // gen purpose bit flag
$cdrec .= "\x00\x00"; // compression method
$cdrec .= "\x00\x00\x00\x00"; // last mod time & date
$cdrec .= pack("V",0); // crc32
$cdrec .= pack("V",0); // compressed filesize
$cdrec .= pack("V",0); // uncompressed filesize
$cdrec .= pack("v", strlen($name) ); // length of filename
$cdrec .= pack("v", 0 ); // extra field length
$cdrec .= pack("v", 0 ); // file comment length
$cdrec .= pack("v", 0 ); // disk number start
$cdrec .= pack("v", 0 ); // internal file attributes
$cdrec .= pack("V", 16 ); // external file attributes - 'directory' bit set

$cdrec .= pack("V", $this -> old_offset ); // relative offset of local header
$this -> old_offset = $new_offset;

$cdrec .= $name;
// optional extra field, file comment goes here
// save to array
$this -> ctrl_dir[] = $cdrec;

}

function add_file($data, $name)

// adds "file" to archive
// $data - file contents
// $name - name of file in archive. Add path if you want

{
$name = str_replace("\\", "/", $name);

$fr = "\x50\x4b\x03\x04";
$fr .= "\x14\x00"; // ver needed to extract
$fr .= "\x00\x00"; // gen purpose bit flag
$fr .= "\x08\x00"; // compression method
$fr .= "\x00\x00\x00\x00"; // last mod time and date

$unc_len = strlen($data);
$crc = crc32($data);
$zdata = gzcompress($data);
$zdata = substr( substr($zdata, 0, strlen($zdata) - 4), 2); // strip zlib header and checksum to leave raw deflate data
$c_len = strlen($zdata);
$fr .= pack("V",$crc); // crc32
$fr .= pack("V",$c_len); // compressed filesize
$fr .= pack("V",$unc_len); // uncompressed filesize
$fr .= pack("v", strlen($name) ); // length of filename
$fr .= pack("v", 0 ); // extra field length
$fr .= $name;
// end of "local file header" segment

// "file data" segment
$fr .= $zdata;

// "data descriptor" segment (optional but necessary if archive is not served as file)
$fr .= pack("V",$crc); // crc32
$fr .= pack("V",$c_len); // compressed filesize
$fr .= pack("V",$unc_len); // uncompressed filesize

// add this entry to array
$this -> datasec[] = $fr;

$new_offset = strlen(implode("", $this->datasec));

// now add to central directory record
$cdrec = "\x50\x4b\x01\x02";
$cdrec .= "\x00\x00"; // version made by
$cdrec .= "\x14\x00"; // version needed to extract
$cdrec .= "\x00\x00"; // gen purpose bit flag
$cdrec .= "\x08\x00"; // compression method
$cdrec .= "\x00\x00\x00\x00"; // last mod time & date
$cdrec .= pack("V",$crc); // crc32
$cdrec .= pack("V",$c_len); // compressed filesize
$cdrec .= pack("V",$unc_len); // uncompressed filesize
$cdrec .= pack("v", strlen($name) ); // length of filename
$cdrec .= pack("v", 0 ); // extra field length
$cdrec .= pack("v", 0 ); // file comment length
$cdrec .= pack("v", 0 ); // disk number start
$cdrec .= pack("v", 0 ); // internal file attributes
$cdrec .= pack("V", 32 ); // external file attributes - 'archive' bit set

$cdrec .= pack("V", $this -> old_offset ); // relative offset of local header
// echo "old offset is ".$this->old_offset.", new offset is $new_offset";
$this -> old_offset = $new_offset;

$cdrec .= $name;
// optional extra field, file comment goes here
// save to central directory
$this -> ctrl_dir[] = $cdrec;
}

function file() { // dump out file
$data = implode("", $this -> datasec);
$ctrldir = implode("", $this -> ctrl_dir);

return
$data.
$ctrldir.
$this -> eof_ctrl_dir.
pack("v", sizeof($this -> ctrl_dir)). // total # of entries "on this disk"
pack("v", sizeof($this -> ctrl_dir)). // total # of entries overall
pack("V", strlen($ctrldir)). // size of central dir
pack("V", strlen($data)). // offset to start of central dir
"\x00\x00"; // .zip file comment length
}
}

//--------- end class

$date = date("Ymd-Hi");
$filename = "$savepath/$dbname-$date.sql";
passthru("/usr/bin/mysqldump --opt -h$dbhost -u$dbuser -p$dbpass $dbname >$filename");

if($use_gzip=="yes"){
$zipfile = new zipfile();

// add the subdirectory ... important!
$zipfile -> add_dir($savepath);

// add the binary data stored in the string 'filedata'
$file6 = fopen($filename,'rb');
$filedata = fread($file6,filesize($filename));
fclose($file6);
//$filedata = "(read your file into $filedata)";
$zipfile -> add_file($filedata, $filename);

// OR instead of doing that, you can write out the file to the local disk like this:
$filename3 = $savepath."/".$dbname.".".$date."_sql.tar.gz";
$fd = fopen ($filename3, "wb");
$out = fwrite ($fd, $zipfile -> file());
fclose ($fd);

//$zipline = "tar -czf ".$dbname.".".$date."_sql.tar.gz ".$dbname."-".$date.".sql";
//echo shell_exec($zipline);
}

if($use_gzip=="yes")
{
$filename2 = $filename3;
}
else
{
$filename2 = "$savepath/$dbname-$date.sql";
}

if($send_email_w_file == "yes" )
{
$fileatt_type = "application/octet-stream"; // MIME type for the attachment (PHP's filetype() only reports "file" or "dir", not a MIME type)
$fileatt_name = "".$dbname."-".$date."_sql"; // change to _sql.tar.gz if gzip file

// Read the file to be attached ('rb' = read binary)
$file = fopen($filename2,'rb');
$data = fread($file,filesize($filename2));
fclose($file);

// Generate a boundary string
$semi_rand = md5(time());
$mime_boundary = "==Multipart_Boundary_x{$semi_rand}x";

// Add the headers for a file attachment
$headers .= "\nMIME-Version: 1.0\n" ."Content-Type: multipart/mixed;\n" ." boundary=\"{$mime_boundary}\"";

// Add a multipart boundary above the plain message
$message = "This is a multi-part message in MIME format.\n\n" ."--{$mime_boundary}\n" ."Content-Type: text/plain; charset=\"iso-8859-1\"\n" ."Content-Transfer-Encoding: 7bit\n\n" .
$message . "\n\n";

// Base64 encode the file data
$data = chunk_split(base64_encode($data));

// Add file attachment to the message
$message .= "--{$mime_boundary}\n" ."Content-Type: {$fileatt_type};\n" ." name=\"{$fileatt_name}\"\n" ."Content-Disposition: attachment;\n" ." filename=\"{$fileatt_name}\"\n" ."Content-Transfer-Encoding: base64\n\n" .
$data . "\n\n" ."--{$mime_boundary}--\n";
}

// Send the message
$ok = @mail($to, $subject, $message, $headers);
if ($ok)
{
echo "Database backup created and sent! File name $filename2";
}
else
{
echo "Mail could not be sent. Sorry!";
}

if($use_ftp == "yes")
{
$ftpconnect = "ncftpput -u $ftp_user_name -p $ftp_user_pass -d debsender_ftplog.log -e dbsender_ftplog2.log -a -E -V $ftp_server $ftp_path $filename2";
shell_exec($ftpconnect);
echo "$filename2 was created and uploaded to your FTP server!";
}

if($remove_sql_file=="yes")
{
exec("rm -r -f $filename");
}

if($remove_gzip_file=="yes")
{
exec("rm -r -f $filename2");
}
?>

All you need to do is: save this on a web server somewhere; edit the variables at top of the script; and set up a cron job to run it at whatever schedule suits.
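The moving parts are small.  As a rough Python sketch (not part of the original script – the paths, names and credentials below are placeholders) of the timestamping and mysqldump invocation involved:

```python
from datetime import datetime

def backup_names(savepath: str, dbname: str, when: datetime) -> tuple:
    # mirrors the PHP date("Ymd-Hi") stamp used to name the dump files
    stamp = when.strftime("%Y%m%d-%H%M")
    sql = f"{savepath}/{dbname}-{stamp}.sql"
    gz = f"{savepath}/{dbname}.{stamp}_sql.tar.gz"
    return sql, gz

def dump_command(host: str, user: str, password: str, dbname: str) -> list:
    # same mysqldump flags as the script; stdout is redirected to the .sql file
    return ["/usr/bin/mysqldump", "--opt", f"-h{host}",
            f"-u{user}", f"-p{password}", dbname]

sql_file, gz_file = backup_names("/home/username/wpbackup", "db_name",
                                 datetime(2009, 12, 1, 3, 0))
```

Everything else in the script (the zipping, the e-mail, the optional FTP upload) is packaging around those two pieces.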

For many people, that should be enough to ensure that their WordPress installation is safe from any catastrophe (note: no warranty is implied and I’m not responsible if things do go wrong for you).  For me though, I have an additional consideration as my images are not in the database… maybe that will be the subject of a future blog post.  In the meantime, I can sleep soundly knowing that the database behind my site is backed up regularly.
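As a footnote, the zipfile class in the script hand-packs raw PKZIP records (local file header, central directory, end-of-central-directory).  The same layout can be sketched and sanity-checked in a few lines of Python (the file name and contents here are illustrative only):

```python
import io
import struct
import zipfile
import zlib

def make_zip(name: bytes, data: bytes) -> bytes:
    """Build a one-file ZIP using the same record layout as the PHP class."""
    crc = zlib.crc32(data) & 0xFFFFFFFF
    comp = zlib.compress(data)[2:-4]  # strip zlib header/checksum -> raw deflate
    # local file header: version 20, no flags, method 8 (deflate), zeroed timestamp
    local = (b"PK\x03\x04" + struct.pack("<HHHHH", 20, 0, 8, 0, 0)
             + struct.pack("<LLLHH", crc, len(comp), len(data), len(name), 0)
             + name + comp)
    # central directory record: 'archive' external attribute bit (32), offset 0
    central = (b"PK\x01\x02" + struct.pack("<HHHHHH", 20, 20, 0, 8, 0, 0)
               + struct.pack("<LLLHHHHHLL", crc, len(comp), len(data),
                             len(name), 0, 0, 0, 0, 32, 0)
               + name)
    # end of central directory: 1 entry, then size and offset of the central dir
    eocd = b"PK\x05\x06" + struct.pack("<HHHHLLH", 0, 0, 1, 1,
                                       len(central), len(local), 0)
    return local + central + eocd

blob = make_zip(b"dump.sql", b"-- pretend mysqldump output\n")
archive = zipfile.ZipFile(io.BytesIO(blob))
```

Python’s standard zipfile module opens the result, which is a reasonable smoke test for the record layout the PHP class emits.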

Controlling who sees which Twitter status updates (tweets)

These days, many of the items that would once have warranted a blog post end up as one of my bookmarks on delicious (posted monthly to this blog in a “useful links” post, with the help of Postalicious) or on my Twitter stream (and I will start to blog the more useful tweets here soon) but I have to admit that, for a long-time tech blogger, I’m still a bit of a newbie when it comes to social media.

I’ve learned a lot though in the last 24 hours – last night I retweeted (the old way – using RT and adding a comment with ^MW then my text – rather than using Twitter’s new retweet function) and the original author thought I was trying to make them look bad.  If getting a message into 140 characters is difficult (even worse than avoiding unintentional emotion in e-mail, which is essentially an emotionless medium), getting it into a 20 or 30 character comment on someone else’s tweet is tough.  Thankfully, that was all resolved with a few more tweets this morning but I noticed that some of the messages I saw were using .@username and that got me wondering what the . is for.

Googling . and @ is not easy so I asked @brynmorgan, who I’d seen use this method, and he explained that the . broadcasts his message to all of his followers, because unless someone also follows me they don’t see a normal @ reply.  I’d never quite understood whether my followers saw all of the responses that I posted to @username and now it makes sense.  I thought this might be useful for others so, assuming I have understood it correctly:

  • @username at the start of a tweet directs it to a specific user; only someone who follows both the sender and the specified username will see the message.
  • .@username directs a tweet to a user while keeping it visible to all of the sender’s followers.
  • D username sends a direct message between two Twitter users (i.e. a private message).
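Assuming I’ve got that right, the visibility rule can be captured in a tiny predicate (my own simplification, not anything official from Twitter):

```python
def appears_in_timeline(tweet: str, follows_author: bool, follows_mentioned: bool) -> bool:
    """Would a given follower's timeline show this tweet? (Simplified 2009-era rules.)"""
    if not follows_author:
        return False  # you only see tweets from people you follow
    if tweet.startswith("@"):
        return follows_mentioned  # a plain reply needs the viewer to follow both parties
    return True  # ".@username" (or any other leading character) reaches all followers
```

In other words, prefixing a reply with “.” simply stops it being treated as a reply at all.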

Thanks to Bryn for educating me and, by the way, there’s some interesting commentary on social media over on the Brynovation blog.

Writing a macro to e-mail a worksheet from an Excel workbook

I spent most of today trying to catch up with my expenses (in order to plug the rather large hole in my bank balance before Christmas).  I work for a large IT company, and to say that our systems are antiquated would be an understatement: I understand that my expense claims are entered onto a computer system prior to payment but, before that, I have to complete an Excel spreadsheet and mail a hard copy, with receipts attached, to a team that processes them (which is corporate-speak for wasting my time, their time, and my manager’s time to quibble over minor infringements).  Recently a new level of bureaucracy was added to the process and, before snail-mailing the hard copy to be processed, I also have to e-mail a soft copy to my manager for approval, using a pre-defined format for the mail subject header.

You can probably tell by the tone of this post that I’m no fan of this process.  I understand from contacts at Microsoft, for example, that their system is entirely electronic, although paper receipts do also need to be submitted for audit purposes and I can’t see why we couldn’t do something similar.  Still, it could be worse: when I worked for a major fashion design, marketing and retail organisation a few years back, they insisted that I staple each receipt to a sheet of A4 paper first…

Anyway, after messing up the process a couple of times today and recalling messages with incorrectly formatted subjects, I decided that it’s something that should be automated.  I’ve never written any Visual Basic for Applications (VBA) before, but, armed with a few code snippets from the web, I managed to write a macro (the whole thing took me about 30 minutes to pull together and debug):

Sub SendToManager()
'
' SendToManager Macro
' Macro to send completed expense worksheet to one's Manager
'
' Keyboard Shortcut: Ctrl+Shift+M
'

'Create a new Workbook Containing 1 Sheet (right most) and sends as attachment.

ThisWorkbook.Sheets(ThisWorkbook.Sheets.Count).Copy

With ActiveWorkbook

.SendMail Recipients:="mail@domain.com", Subject:="Expenses for approval: " & Range("C8").Value & ", " & Range("O8").Value & ", " & Format(Range("O9").Value, "Long Date") & ", " & Format(Range("Q48").Value, "Currency")

.Close SaveChanges:=False

End With

End Sub

The code is here for anyone who might find something similar useful… it will no doubt need modification to suit someone else’s requirements, but the idea is simple: create a copy of the right-most worksheet in the Excel workbook (I create a new sheet for each claim, working left to right…), send it to the specified recipient (one change might be to prompt for a recipient name), and format the subject with data from the sheet so that it eventually reads “Expenses for approval: name, employee number, claim date, claim value” before sending the mail.
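For what it’s worth, the subject line is the fiddly part.  A hedged Python equivalent of the formatting (my reading of the cells – C8 = name, O8 = employee number, O9 = claim date, Q48 = claim total – is an assumption, and “Long Date”/“Currency” are approximated for a UK locale):

```python
from datetime import date

def approval_subject(name: str, emp_no: str, claim_date: date, total: float) -> str:
    # approximates VBA's Format(..., "Long Date") and Format(..., "Currency")
    return (f"Expenses for approval: {name}, {emp_no}, "
            f"{claim_date.strftime('%d %B %Y')}, £{total:,.2f}")
```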

Here are a few links that helped me out in doing this:

Migrating infrastructure services to a Windows Server 2008 R2 computer

Having built a low-power server to run my home infrastructure, I need to get moving on decommissioning the old virtual machines so I can turn off the Dell PowerEdge 840 that runs them.

The first step was to migrate the Active Directory Domain Services from my existing Windows Server 2003 R2 SP2 installation to the new Windows Server 2008 R2 machine:

  1. On the 2003 DC, insert the 2008 R2 DVD and open a command prompt.
  2. Locate adprep.exe (or adprep32.exe if running on a 32-bit architecture – I was already running 64-bit) in the \support\adprep folder (note the changed file location between Windows Server 2008 and 2008 R2 – it used to be in \sources\adprep) and run the following commands (there’s more detail on these actions in Daniel Petri’s article on Windows Server 2008 ADprep):
    • adprep /forestprep (on the schema master for the forest)
    • adprep /domainprep (on the infrastructure master for each domain, after making sure that the domain is in at least Windows 2000 native mode)
    • adprep /domainprep /gpprep (on the infrastructure master for each domain)
    • adprep /rodcprep (if read only domain controllers are required in a Windows Server 2003 forest)
  3. After this, I ran dcpromo.exe on the new server, promoting it to a domain controller in the existing forest/domain, electing to make the server a DNS server and a Global Catalog server at the same time.
  4. With the new server running as a DC, I needed to transfer the FSMO roles.  I did this by following the advice in Microsoft knowledge base article 324801 to: register schmmgmt.dll; run the Active Directory Schema snap-in and transfer the Schema Master role; run Active Directory Domains and Trusts and transfer the Domain Naming Master role; run Active Directory Users and Computers and transfer the RID Master, PDC Emulator and Infrastructure Master roles.  Incidentally, even though I did this with the GUI tools, Adam Bell outlines a much smarter method to transfer FSMO roles using PowerShell.
  5. After checking that the new server’s DNS service successfully performed simple and recursive lookups (on the Monitoring tab in the DNS Server properties) then switching the new server’s primary DNS server to use itself (rather than the old DC), I ran dcpromo.exe on the 2003 server to demote it to a normal domain member, before ensuring that all computers were using the new (2008 R2) DNS server and removing the role from the 2003 computer.
  6. With Active Directory and DNS migrated, the last major service to move was DHCP (I do have some other services – TFTP, a phone directory web service, etc. – running on another server, but they are application services really – this post will concentrate on the infrastructure).  This is pretty straightforward (details can be found on the Microsoft Enterprise Networking team blog) and involves a couple of commands – one to export from the 2003 R2 server and another to import on the 2008 R2 server:
    • netsh dhcp server export filename all
    • netsh dhcp server import filename all
  7. After confirming that the DHCP service was running on the target with all entries transferred, I stopped the DHCP Server service on the source (net stop "DHCP Server"), and renewed a client IP address (by starting up a PC, but running ipconfig /renew would have been as effective) to check that renewals worked before restarting the service (net start "DHCP Server"), deauthorising the original DHCP server and removing the DHCP role.
  8. If I was using the Encrypted File System or the server was a Terminal Services Licensing Server there would be some extra steps (not for me).
  9. Finally, with all services removed from the 2003 machine, I shut it down, deleted the virtual machine files from the host server, and removed the computer account from Active Directory.  With no legacy domain controllers left, the forest and domain functional levels can now be raised if necessary.
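One of the checks above lends itself to scripting: before demoting the old DC (step 5), it’s worth confirming that name resolution still works once clients point at the new DNS server.  A minimal sketch of that idea in Python (the hostname to test is a placeholder; the DNS console’s Monitoring tab or nslookup are the usual tools for this):

```python
import socket

def resolves(hostname: str) -> bool:
    """True if the local resolver can look the name up (a simple-lookup smoke test)."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False
```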

Intel pairs its Atom with a more efficient chipset for low power PCs

Last week I wrote about my attempts to build a low-power infrastructure server for my home network (part 1 and part 2).  After I’d put all the pieces together, I saw a tweet from Jeff Atwood, highlighting a review of a new Intel board that uses a mobile 945GSE chipset instead of the power-sapping 945GC – allowing a much lower-power unit to be built (albeit with a single-core Atom N270 – not the dual-core 330 that I used).  Something like this would make a great PC for home use – although it’s never going to be a screamer for games, it should be pretty good for web surfing, e-mail, and other everyday uses.

John Craddock whiteboards DirectAccess and Active Directory Recycle Bin

I spent yesterday evening at an Active Directory User Group (ADUG) meeting where John Craddock presented on IPv6 and DirectAccess (repeating two of John’s TechEd Europe 2009 sessions).  I have to admit that, at times, I struggled to keep up as the technology went deep, but it was extremely worthwhile.  I hope to find the time to write some blog posts summarising some of the key points from the evening; however, John also has two 10-minute videos on the Dutch site NGN, in which he “whiteboards” DirectAccess and Active Directory Recycle Bin.  These are worth a few minutes of your time to get a quick overview of two new technologies in Windows Server 2008 R2.