Office 365 password resets… and disabling password expiry

My Office 365 account password expired today and, somewhere in the midst of the password reset, I managed to lock myself out. As I only have one mailbox on the account (i.e. I am the administrator), that’s a bit of a problem…

I tried creating a service request to reset my password but I’m not sure it worked – I had no call-back and when I checked later in the Administrator control panel, there were no requests listed; however Dhaval Brahmbhatt (@DhavalBrahmbhat) gave me some UK phone numbers to try (0203 450 6455 or 0800 032 6417).

Using phone support I was able to log a password reset request, once the Technical Support Engineer had confirmed my details.  Because there was no phone number shown on my records, he had to email me so that I could respond with the details. Bearing in mind that I was locked out of my account, this could have been a problem but thankfully Outlook was still connected to Office 365 from my Mac.

After 26 minutes on the phone (at great expense to Microsoft, I guess), I finally had a temporary password to reset my account and then log in as normal.

Goodness knows how I’d have managed if I hadn’t been able to receive an email on the account – although the contact preferences on my Office 365 profile showed a phone number, there was no number in the information for my mailbox… so, lesson number 1, make sure you have a phone number in your mailbox properties (lesson 2 might be to have password resets sent to an alternative mailbox but that seems daft as it’s also where other announcements will end up…).

I’ve decided that I’ll reset my password when I feel like it, rather than when the system says so. Making this change involves some PowerShell:

  • First up, install the Office 365 cmdlets (they’re intended for enterprises, so not all of them will work on small business accounts). There are two components to install: the Microsoft Online Services Sign-In Assistant; and the Microsoft Online Services Module for Windows PowerShell.
  • Next, connect PowerShell to Office 365, either by opening the Microsoft Online Services Module for PowerShell or by opening a normal PowerShell session and typing import-module MSOnline.
  • Authenticate the session by typing Connect-MsolService.

(An older method from Office 365 beta can be found in my previous post on changing the primary email address for Office 365 users – I haven’t tested it recently, but I see no reason why the standard Exchange cmdlets wouldn’t still work on Office 365)

  • Finally, disable password expiration with the following command (replacing MicrosoftOnlineServicesID with the appropriate username):
    Set-MsolUser -UserPrincipalName MicrosoftOnlineServicesID -PasswordNeverExpires $true
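
To check that the change has taken effect, something like the following should work (Get-MsolUser comes from the same module; substitute the username as before):

Get-MsolUser -UserPrincipalName MicrosoftOnlineServicesID | Select-Object UserPrincipalName, PasswordNeverExpires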

Souping up SyncToy

I used to back up my work PC to a set of Virtual Hard Disk (.VHD) files until one day I needed to recover from a failure, and I found that the hard drive encryption software we use prevented me from running a restore. That forced me to find another solution and one of my ReadyNAS devices (sadly not the one that recently suffered two disk failures on the same RAID 1 volume, taking with it a big chunk of my data) is now dedicated to backing up my work PC, with a regular file copy taking place.

I have a drive mapped to a share on the NAS and the command line version of Microsoft’s SyncToy tool (synctoycmd.exe) is set to run as a scheduled task every evening at 10pm. Then, at 11pm, the NAS powers down until 8am the next day. The idea is that, as long as my PC is connected to my home network, it backs up all of the important files, at a time by which I should have stopped working.
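
For reference, the scheduled task itself can be created from the command line with something like this (the task name and script path here are illustrative, not the ones I used):

schtasks /create /tn "SyncToy nightly backup" /tr "C:\scripts\synctoy.bat" /sc daily /st 22:00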

Unfortunately I’m not convinced that it’s working as it should be – just because the Windows 7 Task Scheduler tells me that the task completed doesn’t mean that SyncToy ran successfully (incidentally, if you are having problems with SyncToy on Windows 7, this thread might help).  I was googling for a solution and came across eXDee’s batch files (sometimes the old ways are the best) to check for network connectivity, presence of the appropriate volume and then run synctoycmd.exe, recording a log file on the way. Bingo.

So, here are my versions (only minor updates from eXDee’s originals), called each night from Task Scheduler; a simple check of the lastsync.log file should tell me whether the backup worked or not.
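
A quick way to perform that check is to search the log for the success message written by sync.bat (below) – for example, from PowerShell:

Select-String -Path .\lastsync.log -Pattern "finished successfully"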

Incidentally, don’t be fooled (as I was) by the synctoycmd.exe output that says it saved time by not copying any files. That’s the output from the preview run and there is a long period after this during which there are no status updates whilst the actual file copies take place.

synctoy.bat

This is the control file, to be called from Task Scheduler or run manually from the command line:
@echo off
title SyncToy run in progress…
echo Attempting file sync. Please wait…
sync.bat >lastsync.log

sync.bat

This is the file that checks for the presence of my NAS and for a mapped drive before it backs up my data. You’ll need to substitute your own IP address but I’m particularly impressed by eXDee’s code to look for a TTL rather than a ping success/failure (smart move). Note that I haven’t attempted to map the drive if the connection is not there, although that is a possible enhancement:
@echo off
echo SyncToy Log starting at
time /T
date /T
echo ##############################################
echo Checking connection to NAS…
echo ##############################################
PING -n 2 -w 10 192.168.1.14 |find "TTL=" && goto NAS
goto PINGFAIL

:NAS
echo ##############################################
echo NAS is online. Checking for share…
if exist "F:\Synced with Company PC\" goto SYNC
goto NASFAIL

:SYNC
echo ##############################################
echo Drive is mapped. Begin syncing files…
echo ##############################################
cd "C:\Program Files\SyncToy 2.1\"
SyncToyCmd.exe -R
if %ERRORLEVEL% == 0 goto SUCCESS
goto SYNCFAIL

:PINGFAIL
echo ##############################################
echo NAS not found. Exiting
goto END

:NASFAIL
echo ##############################################
echo Share not found. Exiting
goto END

:SUCCESS
echo ##############################################
echo Synctoy finished successfully. Exiting
goto END

:SYNCFAIL
echo ##############################################
echo Synctoy Failed. Exiting
goto END

:END
echo ##############################################
echo Synctoy Log ending at
time /T
date /T

lastsync.log

An example of a run (the failures were down to file access, rather than any issue with the scripts):

SyncToy Log starting at
21:00
08/11/2011
##############################################
Checking connection to NAS…
##############################################
Reply from 192.168.1.14: bytes=32 time=3ms TTL=64
Reply from 192.168.1.14: bytes=32 time=39ms TTL=64
##############################################
NAS is online. Checking for share…
##############################################
Drive is mapped. Begin syncing files…
##############################################
Preview of Work Folder Backup (C:\Users\markw\Documents\Work\, F:\Synced with company PC\Work\) in time 00:03:08:253.
SyncToy action was ‘Echo’
Found 2 actions to perform.
Found 47,158 files that did not require action.
Analyzed 250.5 files per second.
Avoided copying 135,013,767,205 bytes in 47,158 files.
Saved approximately 03:00:27:00 by not copying any files.

SyncToy run of Work Folder Backup (C:\Users\markw\Documents\Work\, F:\Synced with company PC\Work\) completed at 08/11/2011 21:03:27.
SyncToy action was ‘Echo’.
SyncToy options were:
Active for run all
All files included
No files excluded
Do not check file contents
Include read-only files
Include hidden files
Include system files
Backup older files (send to Recycle Bin)
All subfolders included
SyncToy run took 00:00:00:610.
Copied 5,932,607,488 bytes in 2 files in 00:00:00:610.
Bytes per second 9,725,586,045.9, files per second 3.3.
Avoided copying 135,013,767,205 bytes in 47,158 files that did not require action.
Saved approximately 00:00:13:882 by not copying all files.
Warning: 4 failures occured.
You can retry by selecting “Run” again or select “Preview” to see
the operations that remain to be performed.

The Sync operation completed successfully on folder pair ‘Work Folder Backup’ but some files were skipped. Please look at the logs for more details.
##############################################
Synctoy Failed. Exiting
##############################################
Synctoy Log ending at
21:03
08/11/2011

Migrating mail and contacts from Google Mail to Office 365

I started to write this post in September 2008, when it was titled “Importing legacy e-mail to my Google Apps account”. A few years on and I’m in the process of dumping Google Apps in favour of Microsoft Office 365 (if only Microsoft had a suitable offering back then, I might not have had to go through this) but I never did manage to bring all my mail across from my old Exchange server onto Google.

The draft blog post that I was writing back then (which was never published) said:

First of all, I backed up my Exchange mailbox. I also backed up my Google Mail and, whilst there are various methods of doing this, I used Outlook again, this time to connect to my Google Apps account via IMAP and take a copy into a PST file. With everything safe in a format I knew how to access if necessary, I set about the import, using the Google Email Uploader to import my Outlook mailbox into my Google Apps account.

Rather than import into my live mailbox straightaway, I brought the legacy messages into a new mailbox to test the process. Then I did the same again into my real e-mail.

With many years’ worth of e-mail to import (29097 messages), this took a long while to run (about 20 hours) but gradually I saw all the old messages appear in Google Mail, with labels used to represent the original folder names. There were a few false starts but, thankfully, the Google Email Uploader picks up where it left off. It also encountered a few errors along the way but these were all messages with suspect attachments or with malformed e-mail addresses and could be safely disregarded (I still have the .PST as a backup anyway).

Except that I never did get my mailbox to a state where I was happy it was completely migrated. Uploading mail to GMail was just too flaky; too many timeouts; not enough confidence in the integrity of my data.

In the end, I committed email suicide and started again in Google Apps Mail. I could do the same now in Office 365 but this time I don’t have an archive of the messages in a nice handy format (.PST files do have their advantages!) and, anyway, I wanted to be able to bring all of my mail and contacts across into my nice new 25GB mailbox (I’ve written previously about synchronising my various calendars – although I should probably revisit that too some time).

Migrating contacts

Migrating my contacts was pretty straightforward. I’m using a small business subscription on Office 365 so I don’t have the option of Active Directory integration (the enterprise plans have this). Instead, I exported my contacts from GMail in .CSV format and brought them back into Office 365 through the Outlook Web App.

One thing that’s worth noting – Google Mail has an option to merge duplicate contacts – I used this before I exported, just to keep things clean (e.g. I had some contacts with just a phone number and others with an e-mail address – now they are combined).

Migrating mail

Microsoft seems to have thought the mail migration process for Office 365 through pretty thoroughly. I don’t have space to go into the full details here, but it supports migrations from Exchange 2007 and later (using autodiscover), Exchange 2003 (manually specified settings) and IMAP servers. After a migration batch is initiated, Office 365 attempts to take a copy of the mailbox and then synchronise any changes every 24 hours until the administrator marks the migration as complete (during the migration period they should also switch over the MX records for the domain).

For GMail, IMAP migration is the option that’s required, together with the following settings:

Setting          Value
IMAP Server      imap.gmail.com
Authentication   Basic
Encryption       SSL
Port             993

(I only had one mailbox to migrate.)

Because GMail uses labels instead of folders, I excluded a number of “folders” in the migration too, to avoid duplicates. Unfortunately, this didn’t seem to take effect (I guess I can always delete the offending folders from the imported data, which is all in a subfolder of [Google Mail]).

Finally, I provided a CSV file with the email address, username and password for each mailbox that I wanted to migrate.
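
For a single mailbox that file is trivial – a header row followed by one line per user, something like this (the column names are as I recall them from the documentation and the values are placeholders, so check before relying on them):

EmailAddress,UserName,Password
mark@markwilson.co.uk,mark@markwilson.co.uk,examplepassword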

Unfortunately, I’ve had a few migration failures – and the reports seem to suggest connectivity issues with Google (the Migration Error report includes messages like “Data migration for this mailbox failed because of delays on the remote server.” and “E-Mail Migration failed for this user because no e-mail could be downloaded for 1 hours, 20 minutes.”). Thankfully, I was able to restart the migration each time.

Monitoring the migration

Monitoring the migration is pretty straightforward as the Exchange Online portion of Office 365 gives details of current migrations. It’s also possible to control the migration from the command line. I didn’t attempt this, but I did use two commands to test connectivity and to monitor progress:

Test-MigrationServerAvailability -imap -remoteserver imap.gmail.com -port 993

and:

Get-MigrationStatus

Details of how to connect to Office 365 using PowerShell can be found in my post about changing the primary email address for Office 365 users.

Points of note

I found that, whilst a migration batch was in process, I needed to wait for that batch to finish before I could load another batch of mailboxes. Also, once a particular type of migration (Exchange 2003, 2007 or IMAP) has been started, it’s not possible to create batches of another type until the migration has been completed. Finally, completing a migration can take some time (including clean-up) before it’s possible to start a new migration.

Wrap-up

It’s worth noting that Office 365 is still in beta and that any of this information could change. 24 hours seems a long while to wait between mailbox synchronisations (it would be good if this was customisable) but the most significant concern for me is the timeouts on mailbox migrations. I can rule out any local connectivity issues as I’m migrating between two cloud services (Google Apps Mail and Office 365) – but I had enough issues on my (single mailbox) migration to concern me – I wouldn’t want to be migrating hundreds of mailboxes this way. Perhaps we’ll see third party tools (e.g. from Quest Software) to assist in the migration, comparing mailboxes to see that all data has indeed been transferred.

Changing the primary email address for Office 365 users

In my recent post about configuring DNS for Office 365, I mentioned that Microsoft creates mailboxes in the form of user@subdomain.onmicrosoft.com.  I outlined the steps for adding so-called “vanity” domains, after which additional (proxy) email addresses can be specified but any outbound messages will still be sent from the onmicrosoft.com address (at least, that’s what’s used in the beta – I guess that may change later in the product’s lifecycle).

It is possible to change the primary address for a user (e.g. I send mail using an address on the markwilson.co.uk domain) but it does require the use of PowerShell.  Time to roll up your sleeves and prepare to go geek!

Connecting to Office 365 from Windows PowerShell

I was using a Windows 7 PC so I didn’t need to update any components (nor do Windows Server 2008 R2 users); however Windows XP SP3, Server 2003 SP2, Server 2008 SP1 or SP2 and Vista SP1 users will need to make sure they have the appropriate versions of Windows PowerShell and Windows Remote Management installed.

Once PowerShell v2 and WinRM 2.0 are installed, the steps to connect to Office 365 are as follows:

Prompt for logon credentials and supply the appropriate username and password:

$LiveCred = Get-Credential

Create a new session:

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection

Import the session to the current PowerShell console:

Import-PSSession $Session

At this point, the session import failed for me because script execution was disabled on my machine. That was corrected using Set-ExecutionPolicy -ExecutionPolicy unrestricted (although that’s not a great idea – a less permissive policy, such as RemoteSigned, would be better) – I also had to run PowerShell as an administrator to successfully apply that command.

Once scripts were enabled, I was able to import the session.

List the current mailbox addresses

It’s possible that a mailbox may have a number of proxy addresses already assigned, so this is the code that I used to list them:

$Mailbox = Get-Mailbox -Identity Mark-Wilson
$Mailbox.EmailAddresses

If you want to format the list of addresses as a single comma-separated line, then this might help:

ForEach ($i in $Mailbox.EmailAddresses) {Write-Host $i -NoNewline “`b, “}

(the `b is a backspace escape character.)

Set the primary email address

The primary email address is shown using an upper case SMTP: prefix whereas proxy addresses use a lower case smtp: prefix.

To change the primary email address, it’s necessary to reset all addresses on the mailbox with the Set-Mailbox cmdlet.  This is where some copying/pasting of the output from the previous command may help:

Set-Mailbox Mark-Wilson -EmailAddresses SMTP:mark@markwilson.co.uk,smtp:mark-wilson@markwilson.onmicrosoft.com,smtp:mark@markwilson.it
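
Alternatively, the new list can be built programmatically rather than retyped – a rough sketch (not what I actually ran), assuming the address that should become primary is already present as a proxy:

$Mailbox = Get-Mailbox -Identity Mark-Wilson
$NewAddresses = $Mailbox.EmailAddresses | ForEach-Object {
    $Address = $_.ToString().ToLower()
    # re-capitalise the prefix on the address that should become the primary (SMTP:)
    if ($Address -eq "smtp:mark@markwilson.co.uk") { "SMTP:" + $Address.Substring(5) } else { $Address }
}
Set-Mailbox Mark-Wilson -EmailAddresses $NewAddresses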

Disconnect the session from Office 365

Once all changes have been made, it’s good practice to break down the session again:

Remove-PSSession $Session

Shooting tethered on my Nikon D700… using PowerShell

About this time last week, James O’Neill was explaining to me how Windows Image Acquisition (WIA) could be used to control my camera over a USB connection. I’m not sure if he told me, or if I suddenly realised, but somewhere along the way came the realisation that I could use this to take a picture – i.e. to drive the camera remotely – and James very kindly shared some Windows PowerShell commands with me.

Today, James published the results of his work, saving me a lot of research into WIA and a related subject – Picture Transfer Protocol (PTP) but, unlike James’ Pentax K7, it seems that my Nikon D700 will allow me to use this to actually take a picture (I haven’t tried on my Canon Ixus 70… with or without the CHDK).

James’ code showed me how to call WIA as a COM object:

$WIAdialog = New-Object -ComObject "WIA.CommonDialog"
$device = $WIAdialog.ShowSelectDevice()

Following this I had an object called $device that I could manipulate as I liked and $device | get-member returned the following methods and properties:

   TypeName: System.__ComObject#{3714eac4-f413-426b-b1e8-def2be99ea55}

Name           MemberType Definition
----           ---------- ----------
ExecuteCommand Method     IItem ExecuteCommand (string)
GetItem        Method     IItem GetItem (string)
Commands       Property   IDeviceCommands Commands () {get}
DeviceID       Property   string DeviceID () {get}
Events         Property   IDeviceEvents Events () {get}
Items          Property   IItems Items () {get}
Properties     Property   IProperties Properties () {get}
Type           Property   WiaDeviceType Type () {get}
WiaItem        Property   IUnknown WiaItem () {get}

$device.Properties was kind of interesting but with $device.Commands I was really getting somewhere:

CommandID                               Name          Description
---------                               ----          -----------
{9B26B7B2-ACAD-11D2-A093-00C04F72DC3C}  Synchronize   Synchronize
{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}  Take Picture  Take Picture

Seeing that there was a command to take a picture got me thinking and looking back at the device methods I could see ExecuteCommand so I tried calling it:

$device.executecommand('{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}')

I was amazed to find that my camera did exactly what it was told and fired the shutter! I need to do some more testing, to see if I can control the focus, or return a live preview, etc. but controlling a remote device, over a USB connection, using nothing more than a few basic scripting commands made me feel like a real techie again (even if it was James’ code that got me started!). Who knows, I may even teach myself to code again (as I’ve threatened several times over the last few years) and write an application to remotely control my camera.

Ironically, at the start of last week I was trying to figure out how to take time-lapse photos of the extension that I’m having built on my house right now but it wasn’t software that held me back, it was practical issues like leaving a camera outside for days on end in all weathers and providing power to it. Now, if only I had a 25 metre USB cable (!), I could hook up a cheap webcam and set a script to take a picture every hour or so…
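
In the meantime, the shutter-release command is trivially wrapped in a loop – a minimal sketch, assuming the camera stays connected and that $device is the WIA device obtained earlier:

# fire the shutter once an hour, using the Take Picture command ID from above
while ($true) {
    $device.ExecuteCommand('{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}') | Out-Null
    Start-Sleep -Seconds 3600
}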

Further reading

WIA Camera Devices on MSDN.
WIA Camera support in Windows Vista (part 1 and part 2).
WIA 2.0 and digital camera interaction.

Microsoft PowerShell, VBScript and JScript Bible

At last night’s joint user group meeting for the Windows Server UK User Group and the Active Directory UK User Group, James O’Neill mentioned that the book he has co-authored (Microsoft PowerShell, VBScript and JScript Bible, published by John Wiley and Sons) goes on sale today.

I haven’t had the chance to review it yet but knowing how immersed James is in PowerShell (and that he wrote the PowerShell sections), I would suggest this might be worth considering if you are looking for a good reference book.

Installing WordPress on a Mac: the aftermath (phpMyAdmin, databases, themes, plugins and fixing the tags)

Last week I wrote about installing WordPress on a Mac but I wanted to follow that up with a post on what happened next.

Installing phpMyAdmin

Installing WordPress is all very well, but it’s good to be able to manipulate the database. The command line mysql tools would have worked but a graphical interface (even an ugly one with bizarre icons) is often easier, so I installed phpMyAdmin as described by Nino Müller:

  • Download the latest version of phpMyAdmin (I used v3.1.2).
  • Extract the files to ~/Sites/phpmyadmin.
  • Copy config.sample.inc.php to config.inc.php and edit the Blowfish secret (the line which reads $cfg['blowfish_secret'] = '';).
  • Navigate to http://localhost/~username/phpmyadmin and log in.

Unfortunately, after attempting to logon, I was presented with an error message:

#2002 – The server is not responding (or the local MySQL server’s socket is not correctly configured)

Following Vince Wadhwani’s advice at his Hackido site I typed mysql_config --socket to verify the socket path in use for MySQL (e.g. /tmp/mysql.sock) but I couldn’t find a config.default.php file (or the equivalent entry in my config.inc.php file) to adjust the socket. A post at Friends of ED suggested creating a symbolic link for the socket and it seemed to work for me:

sudo mkdir /var/mysql
sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock

Following this I could log into phpMyAdmin (although I still have a warning message that phpMyAdmin cannot load the mcrypt extension – this doesn’t seem to be causing any problems though).

Importing a WordPress database

Using phpMyAdmin on my web host’s server, I exported the database from the live copy of markwilson.it and attempted to import it on the Mac. Unfortunately this didn’t work as my database was too large for PHP to upload – as confirmed by creating a file named ~/Sites/phpinfo.php containing <?php phpinfo(); ?>, then viewing it in a browser (http://localhost/~username/phpinfo.php) and looking for the upload_max_filesize variable.

Rather than messing around with my PHP configuration, I googled for the necessary commands and typed:

/usr/local/mysql/bin/mysql -u root -p
drop database wordpressdatabasename;
source ./Downloads/wordpressdatabasename.sql
quit

At this point, the local copy of WordPress was running on the live database, but the links were all to the live site, so I used phpMyAdmin to edit the site URL in the wp_options table, changing it from http://www.markwilson.co.uk/blog to http://localhost/~username/blog.

Because the live copy of the site is on an old version of WordPress, browsing to http://localhost/~username/blog/wp-admin prompted me to upgrade the database, after which I could log in and edit the site settings (e.g. the blog address).

[Screenshots: the WordPress database upgrade prompt and the WordPress 2.7 Dashboard]

Restoring the theme and plugins

At this point, although WordPress was running on a local copy of my live database, the normal theme was missing and the plugins were disabled (as they were not present). I copied them from the live server and, after selecting the theme and enabling the plugins, saw something approaching normality, although there were a few plugins that required updating and I still couldn’t get rid of a particularly annoying database error:

WordPress database error: [Table ‘wordpressdatabasename.wp_categories’ doesn’t exist]
SELECT cat_name FROM wp_categories ORDER BY cat_name ASC

By disabling plugins one by one (I could also have grepped ~/Sites/blog/wp-content/plugins for wp_categories), I found that the issue was in the Bad Behavior plugin that I use to ban IP addresses known to send spam.

Moving from categories to tags

When I first moved this site to WordPress, I used Dean Robinson’s Ultimate Category Cloud plugin to provide a tag cloud (at that time WordPress did not support tags). Over time, that became unmanageable and, although I still need to define a decent taxonomy for the site, the categories still have some value if they are converted to tags.

Over some tapas and drinks in the pub, my friend Alex Coles at ascomi and I had a look at the database structure and Alex came up with a quick SQL query to run against my WordPress database:

UPDATE wp_term_taxonomy SET taxonomy='post_tag' WHERE taxonomy='category'

That converted all of my categories to tags, but there were some I manually edited to return to categories (General – which was once called Uncategorised – and Site Notices). For some reason, though, all the posts were recorded in a category of Uncategorized. Some late night PHP coding (reminiscent of many nights at Uni’ – as Steve will no doubt remember – although in those days it was Modula-2, C, C++ and COBOL) resulted in a script to run through the database, identify all posts with a category of 17 (which in my database is the default category of “General”), put the post numbers into an array and then explicitly set the category as required, taking a note of the ones which have been changed so that they can be ignored from that point on:

<html>
<head>
</head>

<body>
<?php

// Connect to the WordPress database
$db_hostname = "localhost:/tmp/mysql.sock";
$db_username = "wordpressuser";
$db_password = "wordpresspassword";
$db_connect = mysql_connect($db_hostname, $db_username, $db_password) or die("Unable to connect to server.");
$db = mysql_select_db("wordpressdatabasename",$db_connect);

// Retrieve all objects including a category with the value of 17 (my default category)
$hascat = mysql_query("SELECT object_id FROM wp_term_relationships WHERE term_taxonomy_id = '17' ORDER BY object_id");
echo '<p>'.mysql_num_rows($hascat).' rows found with category</p>';

$correct_ids = array();

// Build a PHP array (not a MySQL array) containing the relevant object IDs for later comparison
while ($row = mysql_fetch_array($hascat))
{
    $correct_ids[] = $row[0];
}
echo '<p>Array built. Length is '.count($correct_ids).'. First ID is '.$correct_ids[0].'.</p>';

// Retrieve every object
$result = mysql_query("SELECT * FROM wp_term_relationships ORDER BY object_id");
echo '<p>'.mysql_num_rows($result).' rows found total</p>';

// The magic bit!
// If the object is not in our previous array (i.e. the category is not 17)
// then add it to category 17 and put it in the array so it won't get added repeatedly
while ($row = mysql_fetch_array($result))
{
    if (!in_array($row['object_id'],$correct_ids))
    {
        // Add to category 17
        mysql_query("INSERT INTO wp_term_relationships (object_id,term_taxonomy_id,term_order) VALUES ('".$row['object_id']."','17','0')");
        echo '<p>Alter database entry for object '.$row['object_id'].'.</p>';
        // Add to the array so it is not flagged again
        $correct_ids[] = $row['object_id'];
    }
    else echo '<p style="color:white; background-color:black">'.$row['object_id'].' ignored.</p>';
}

?>
</body>
</html>

Remaining issues

Permalinks don’t seem to work – it seems that Mac OS X does not allow .htaccess files to override settings by default and, whilst it’s possible to change this for the root web folder, it doesn’t seem to work for individual user sites. I’ll do some more digging and see if I can find a fix for that one.
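
For the record, the usual fix on this vintage of OS X appears to involve the per-user Apache configuration file at /etc/apache2/users/username.conf – a sketch (username is a placeholder, and I haven’t tested this yet):

<Directory "/Users/username/Sites/">
    Options Indexes MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

…followed by sudo apachectl restart to pick up the change.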

WordPress also features the ability to automatically update plugins (and itself), but my installation is falling back to FTP access when I try to update it and making it work appears to be non-trivial. Again, I’ll take another look when I have more time.

Copying Outlook profiles, looking up SIDs and rediscovering Outlook 2007’s autodiscovery functionality

One of the benefits of not being so “hands on” these days is not having to mess around with Outlook profiles but, after joining my Windows 7 workstation to my employer’s Active Directory domain last week, I was faced with the prospect of migrating certain settings between the profile for a local user account and the profile for my cached domain logon. It should have been easy to set up a new profile, but for some reason I couldn’t get Outlook to connect to my server, so I decided to copy the working profile from the local user account.

There are various ways to export Outlook account information but I decided to fall back to direct registry manipulation, exporting the registry values at HKEY_USERS\SID\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles\Outlook (thanks to Dave Saxon for that tip), then massaging the resulting .reg file to change the SID and re-importing.

Incidentally, to find out which SID relates to which username, I followed a Microsoft Scripting Guys article to run the following VBScript:

strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set objAccount = objWMIService.Get _
("Win32_UserAccount.Name='username',Domain='computername'")
Wscript.Echo objAccount.SID
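
The PowerShell equivalent is a one-liner (the account and computer names are placeholders, as in the VBScript above):

(Get-WmiObject Win32_UserAccount -Filter "Name='username' AND Domain='computername'").SID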

The main problem with this method was that my profile included an offline folder file (.OST) which was not accessible for my domain user account. It did, however, allow me to verify the settings that were required and to attempt to set up a new profile.

As it happens, even that was unsuccessful, so I tried the Repair button in the Outlook account settings, which invoked Outlook 2007’s autodiscovery functionality. If only I’d thought to use that in the first place… Still, at least it exposed me to the workings of an Outlook profile.

Incidentally, whilst researching this post, I came across some more information that might be useful if you’re trying to move Outlook data around.

There are two key locations containing many of the Outlook data files:

  • %userprofile%\AppData\Roaming\Microsoft\Outlook (also accessible at %userprofile%\Application Data\Microsoft\Outlook)
  • %userprofile%\AppData\Local\Microsoft\Outlook (also accessible at %userprofile%\Local Settings\Application Data\Microsoft\Outlook)

Some of the useful files (which may exist outside those two folders, and which may vary according to the version of Outlook) include:

  • profilename.NK2 (or .NICK) – nickname files with auto completion information.
  • profilename.xml – navigation pane settings.
  • .PST files – personal folders.
  • archive.pst – archived data (my personal preference is to turn off AutoArchive and manage it manually).
  • .PAB files – personal address book files.
  • .FAV files – Outlook Bar shortcuts.
  • .RWZ files – Rules Wizard rules.
  • .DIC files – dictionary files.
  • views.dat – customized system folder views.
  • outcmd.dat – customised toolbar settings.
  • extend.dat – references to extensions (add-ins).
  • .sharing.xml.obi files – RSS subscription names.
  • .xml.kfl – RSS known feed list.

Signatures, Stationery and Templates have their own folders under %userprofile%\AppData\Roaming\Microsoft:

  • \Signatures (.RTF, .HTM, and .TXT files).
  • \Stationery (.HTM files).
  • \Templates (.OFT files).

You may also find some send and receive settings (.SRS) files. These are workstation specific and appear to be created on the first run of Outlook for each messaging profile. Consequently they do not need to be migrated.

Similarly, offline address book (.OAB) files should be downloaded from the server.

Finally, just as I was about to post this, I found an Outlook Backup Tutorial covering both Outlook and Outlook Express, which might be useful if you want to back up just your Outlook data (I tend to back up the whole machine).

A quick look at Windows PowerShell 2

Richard Siddaway‘s recent TechNet presentation (around the datacentre in 80 scripts) was a first opportunity for me to have a look at what’s coming in the next version of Windows PowerShell.

I’ve written previously about PowerShell (as an introduction to the concept and from an IT administrator standpoint) but, just to summarise: in a logical diagram of the Windows Server System, PowerShell sits between Windows Server and the rest of the Windows Server System as the integration and automation engine. PowerShell support is part of Microsoft’s common engineering criteria for 2009 – it’s already widely used by Exchange Server, SQL Server and recent System Center products – and there is growing third-party support too.

Whilst PowerShell is really an automation engine, it’s commonly expressed as a command shell and scripting language which underlies the graphical user interface. PowerShell is based on the Microsoft .NET Framework but does not require a knowledge of .NET programming. As for whether it will eventually replace cmd.exe as the CLI in Windows – maybe one day, but not for a while yet (maybe not at all – Unix has several shells to choose from for administration).

Key PowerShell features include:

  • cmdlets – small pieces of functionality which each perform a single function (and use a verb-noun naming structure).
  • Providers – functionality to open a data store as if it were a file system (e.g. certificate store, registry, etc.).
  • Extensibility – there are around 130 cmdlets in the PowerShell base and functionality can be added as required (Exchange, SQL, etc.) in the same way that Microsoft Management Consoles are built up from various snap-ins. A Windows Installer file registers a DLL and PowerShell accesses it as a snap-in (using the add-pssnapin command in the profile) and from that point on the additional functionality is available in PowerShell (see the example after this list).
  • Pipeline – the pipeline is used to pass .NET objects between cmdlets (non-programmers – think of objects as “blobs of stuff” with methods and properties to do things with them!)
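
For example (the snap-in name below is Exchange 2007’s, purely as an illustration):

# list the snap-ins registered on this machine, then load one of them
Get-PSSnapin -Registered
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin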

Windows PowerShell was originally released in November 2006 and was finally included within Windows Server 2008 this year (it wasn’t ready in time for Vista). At the time of writing, PowerShell 2.0 is still a community technology preview (there have been two releases – CTP and CTP2) so there may be changes before release, but some of the improvements we can expect to see (and this list is not exhaustive), based on CTP2, are:

  • Remoting. New remoting capabilities require PowerShell to be installed on both the client and the server and use Windows Remote Management (WinRM), which is based on WS-Management (check that winrm is running with get-service winrm). At present, remoting requires administrator rights for both configuration and use.
  • Jobs. PowerShell jobs run asynchronously and can be started using the psjob cmdlets (get-command *.psjob to list available cmdlets), some cmdlets support the -asjob parameter (get-help * -parameter asjob) where that option is provided.
  • Runspaces. Jobs can also be used with PowerShell’s remoting capabilities in RunSpaces, which create a persistent connection between the local and remote machines in order to speed up the response. Remote commands are invoked using invoke-command. For example, to create a runspace and execute a script as a job, I might use the following code:
    $r = new-runspace -computername mycomputer
    invoke-command -runspace $r -scriptblock {remotescript} -asjob

    after which I could use get-psjob and other cmdlets to manipulate the job (e.g. check on progress, receive data, etc.).
  • Script cmdlets. Cmdlets can now be written in PowerShell, rather than being compiled from a .NET language.
  • Transactions. In the same manner as SQL Server, Exchange Server and Active Directory apply a database transaction-logging mechanism, PowerShell now has the potential for transaction-based processing (i.e. carry out an action; if it completes then OK, if not then roll back). This functionality is implemented at the provider level so is not universally available (at the time of writing, only the registry supports this) – see the sketch after this list.
  • Graphical PowerShell. A new tool, with script editor, interactive prompt and results pane.
  • WMI. Improved support for Windows management instrumentation (WMI) through type accelerators ([WMI], [WMIClass] and [WMISearcher]), the ability to pass credentials with get-wmiobject and new wmi-focused cmdlets (invoke-wmimethod, set-wmiinstance, remove-wmiobject). In a simple example to launch a process using WMI I might use the following code:
    $c = [WMIClass]"Win32_Process"
    $c.create("win32program.exe")
    and to clear up afterwards I might use:
    get-wmiobject -class win32_process -Filter "Name='win32program.exe'" | remove-wmiobject
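
As a taste of the transaction support, here’s a minimal sketch against the registry provider (cmdlet names as I understand them from the preview documentation – they may change before release):

Start-Transaction
# the new key is not visible in the registry until the transaction is committed
New-Item HKCU:\Software\TransactedDemo -UseTransaction
Complete-Transaction
# (Undo-Transaction would have rolled the change back instead)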

It should be stressed that PowerShell 2.0 is still under development (it’s a community technology preview – not even a beta) and that things may change. It may also break things – there are also some naming clashes (e.g. with the PowerShell Community Extensions), new keywords (e.g. data) and it’s more complicated than the original version. Even so, PowerShell 1.0 already has tremendous potential and I’d be using it more often if I was doing more administration work. As more products use PowerShell for automation then knowing how to use it will become an ever-more important skill for Windows administrators – version 2 is definitely worth a look and if you want to know more about PowerShell then I recommend checking out the PowerShell UK user group and the PowerShell team blog.

PowerShell running on server core (without resorting to application virtualisation)

PowerShell evangelist (and Microsoft deployment guru) David Saxon dropped me a note this morning to let me know that Quest Software’s Dmitry Sotnikov has got PowerShell running on Server Core.

Nice work Dmitry. It’s not a supported configuration (as Jeffrey Snover notes in his post on the PowerShell Team blog) but something that people have been wanting to see for a while now.

(Aaron Parker managed to get this working another way – using application virtualisation)