Tag: Security

  • Is Apple really encouraging me to click a link that could go anywhere?

    Earlier today I was installing an app on my iPad and the iTunes Store wanted some “additional security details”. I set up some questions and answers, feeling reasonably confident that, as I was using the App Store app, the details were actually being taken by Apple. In addition, it requested an optional email address for account recovery, but it wouldn’t let me use my normal email address because that’s also used for my Apple ID (so why does that make it invalid for account recovery?).

    I supplied a different email address and the App Store accepted the “additional security details” and let me complete my purchase…

    Then, I got this email:

    From: Apple [appleid@id.apple.com]
    Sent: 27 April 2012 14:08
    To: Mark Wilson
    Subject: Please verify that we have the right address for you

    Thank you.

    You’ve taken the added security step and provided a rescue email address. Now all you need to do is verify that it belongs to you.

    The rescue email address that you gave us is [email address removed] . Just click the link below to verify, sign in using your Apple ID and password, then follow the prompts.

    Verify Now >

    The rescue email address is dedicated to your security and allows Apple to get in touch if any account questions come up, such as the need to reset your password or change your security questions. As promised, Apple will never send any announcements or marketing messages to this address.

    When using Apple products and services, you’ll still sign in with your primary email address as your Apple ID.

    It’s about protecting your identity. 
    Just so you know, Apple sends out an email whenever someone adds or changes a rescue email address associated with an existing Apple ID. If you received this email in error, don’t worry. It’s likely someone just mistyped their own email address when creating a new Apple ID.

    If you have questions or need help, visit the Apple ID Support site.

    Thanks again,

    Apple Support

    (The actual email was prettier than this, for example it contained graphics with Apple logos, and an Apple footer, but the words are reproduced here almost verbatim – in addition to removing my email address, I’ve also edited the verification link to make it invalid, but otherwise that’s the way it was presented).

    This email annoys me for two reasons.

    1. I hate security theatre. Real security should involve something I have and something I know. All of Apple’s questions are just about something I know. In effect, it’s just multiple passwords…
    2. Apple have sent me an email asking me to confirm an email address but with no personally identifying information (no “Dear Mark”; no “Dear Mr Wilson”, nothing that confirms my relationship with them), asking me to click a link that could go anywhere. If this were from PayPal we’d be saying “noooo – don’t do it, it’s a phishing attack!”.

    I was very careful about checking out the link in the email and it does appear to have been genuine, but Apple has an enormous market of largely unsuspecting and trusting consumers, not all of whom could be described as “IT literate”. By not encouraging any form of “safe computing”, Apple is setting a very bad example – and is reinforcing practices that consumers should be avoiding. Microsoft has some good advice on its site about the symptoms of phishing, and several of those symptoms are present in the email I received from Apple.

    Earlier today I dismissed an article that quoted Eugene Kaspersky as saying Apple was 10 years behind Microsoft in terms of security [awareness] – too many vested interests at play, I thought. On the other hand, if this afternoon’s email really does represent Apple’s corporate culture towards security, they do have some serious catching up to do…

  • Bring your own… or use what you are told?

    A few days ago, I read an article about the risks presented by IT consumerisation. It rang alarm bells with me because, whilst the premise is sound (there are risks, some serious ones, and they need to be mitigated), the focus seemed to be on controlling data leakage by restricting access to social media and locking down device functionality (restricting USB ports, etc.). Whilst that was once an accepted model, I have to question whether UWYT (use what you are told) is really the approach we should be taking in this day and age.

    One of the key topics within the overall consumerisation theme is concerned with “bring your own” (BYO) device models. I recently wrote a white paper on this topic (a condensed “insight and opinion” view is also available) but, in summary, BYO offers IT departments an opportunity to provide consumer-like services to their customers – i.e. business end users.

    In a recent dialogue on Twitter, one of my contacts was suggesting that Fortune 500 companies won’t go for BYO.  But the tide does seem to be turning and there are significant enterprises who are seriously considering it. I’ve been involved in several discussions over recent weeks and I’ve even seen articles in mainstream press about BYO adoption (for example, Qantas has publicly announced plans to allow up to 35,000 employees to connect their own devices to the corporate network). Interestingly, both those links are to Australian publications – maybe we’re just a little more conservative over here?

    Of course, there are hurdles to cross (particularly around manageability and security) and it’s not about undoing the work put into managing “standard operating environments” but about recognising how to build flexibility into our infrastructure and open up access to what business end users really need – information!

    We need to think about device ownership too and, in particular, about whose data resides where. Indeed, one of the best articles I’ve read on the topic was Art Wittmann’s suggestion that a BYO strategy should start with data-centric security, including this memorable quote:

    “Understandable or not, if ‘your device is now our device’ is the approach your team is taking, you need to rethink things”

    Virtualisation can help with the transition, as can digital rights management. Ultimately we need to re-draw our boundaries and we may find ourselves in a place where the office network is considered “dirty” (just as the coffee shop Wi-Fi is today) and we access services (secured at the application or, better still, at the data layer) rather than concerning ourselves with device or technology-dependent offerings.

    Putting myself in a customer’s shoes for a moment, I expect that I’d be asking if Fujitsu is following a BYO model and the answer is both “yes” and “no”. As we’re a device manufacturer, it presents some image problems if our people are using other vendors’ equipment so, here in the UK and Ireland, our PCs are still provided by a central IT function. Having said that, there are some choices, with a catalogue to select from (based on defined eligibility criteria [- a choose your own device scheme]). We also operate a BYO scheme for mobile devices, based on [Fujitsu’s] Managed Mobile service.

    So we can see that BYO is not an all-or-nothing solution. And, whilst I’ve only scraped the surface here, it does need to be supported with appropriate changes to policies (not just IT policies either – there are legal, financial and human resources issues to address too).

    To me it seems that ignoring consumerisation is a perilous path – it’s happening and if senior IT leaders are unable to support it, they may well find themselves bypassed. Of course, not every employee is a “knowledge worker” and there will be groups for whom access to social media (or even access to the Internet) or the ability to use their own device is not appropriate. For many others though, the advantages of “IT as a service” may be significant and far-reaching.

    [This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

  • There was no “phone hacking” – but have you changed your voicemail codes?

    Before I start, let’s get one thing straight – I’m not condoning the actions of the News of the World, or any other medium that is illegally/illicitly accessing people’s personal information. And I added my name to more than one anti-Murdoch/News International petition this week. I’m appalled by some of what’s come out over the last few days but this is a technology blog – and this post is about technology, not politics or crime syndicates (sorry, news organisations) [source: Google/Wikipedia].

    I just want to highlight something that was posted on Phil Hendren (aka Dizzy)’s Dizzy Thinks blog last year:

    “* Calling someone’s mobile, waiting for it to go to voicemail and then entering their four digit pin (0000) is not hacking. Hacking is about circumventing security, not being presented with [a security check] and passing [it].

    ** Calling someone’s mobile, waiting for it to go to voicemail and then entering their four digit pin (0000) is not tapping. Tapping is the covert act of real-time interception of active communication links.”

    But we can all protect ourselves – David Rogers explains how on Sophos’ Naked Security blog. So I urge everyone to check their voicemail access code(s) and change it/them to something non-default. After all, you wouldn’t make your email logon credentials consist of your email address as a username and the word “password” as your password, would you?

  • Yikes! My computer can tell websites where I live (thanks to Google)

    A few months ago there was a furore as angry Facebook users rallied against the social networking site’s approach to sharing our personal data. Some people even closed their accounts but at least Facebook’s users choose the information that they post on the site. OK, so I guess someone else may tag me in an image, but it’s basically up to me to decide whether I want something to be made available – and I can always use fake information if I choose to (I don’t – information like my date of birth, place of birth, and my mother’s maiden name is all publicly available from government sources, so why bother to hide it?).

    Over the last couple of weeks though, I’ve been hearing about Google being able to geolocate a device based on information that their Streetview cars collected. Not the Wi-Fi traffic that was collected “by mistake” but information collected about Wi-Fi networks in a given neighbourhood, used to create a geolocation database. Now, I don’t really mind that Google has a picture of my house on Streetview… although we were having building work done at the time, so the presence of a builder’s skip on my drive does drag down the impression of my area a little! What I was shocked to find was that Firefox users can access this database to find out quite a lot about the location of my network (indeed, any browser that supports the Geolocation API can) – in my case it’s only accurate to within about 30-50 metres, but that’s pretty close! I didn’t give consent for Google to collect this – in effect they have been “wardriving” the streets of Britain (and elsewhere). And if you’re thinking “that’s OK, my Wi-Fi is locked down”, well, so is mine – I use WPA2 and only allow certain MAC addresses to connect, but the very existence of the Wi-Fi access point provides some basic information to clients.

    Whilst I’m not entirely happy that Google has collected this information, it’s been done now, and being able to geolocate myself could be handy – particularly as PCs generally don’t have GPS hardware and location-based services will become increasingly prevalent over the coming years.  In addition, Firefox asks for my consent before returning the information required for the database lookup (that’s a requirement of the W3C’s Geolocation API)  and it’s possible to turn geolocation off in Firefox (presumably it’s as simple in other browsers too).
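    For anyone wondering what that consent prompt is actually guarding, here is a minimal TypeScript/JavaScript sketch of the W3C Geolocation API call that a web page makes (my own illustration, not code from any of the sites or demos mentioned in this post). The browser won’t hand any co-ordinates back to the page until the user agrees:

    // Minimal sketch of the W3C Geolocation API, as supported by Firefox and
    // other browsers that implement it. The browser asks the user for consent
    // before the success callback ever receives a position.
    function whereAmI(): void {
      if (!navigator.geolocation) {
        console.log("This browser does not support the Geolocation API");
        return;
      }
      navigator.geolocation.getCurrentPosition(
        (position) => {
          // Only reached once the user has granted permission in the browser
          const { latitude, longitude, accuracy } = position.coords;
          console.log(`Latitude ${latitude}, longitude ${longitude} (accurate to ~${accuracy}m)`);
        },
        (error) => {
          // error.code 1 (PERMISSION_DENIED) means the user refused the prompt
          console.log(`Lookup failed or was refused: ${error.message}`);
        }
      );
    }

    whereAmI();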

    What’s a little worrying is that a malicious website can grab the MAC address of a user’s router, after which it’s just a simple API call to find out where the user is (as demonstrated at the recent Black Hat conference).  The privacy and security implications of this are quite alarming!

    One thing’s for sure: Internet privacy is an oxymoron.

  • How much is your personal information worth?

    We’ve all heard the horror stories about personal information, such as credit card details, falling into the wrong hands. Thankfully, in many cases the banking system limits the damage, but it’s a growing problem.

    Now Symantec have issued the results of some research showing that this information may be sold on the black market for as little as a few pence, as cyber criminals use generous retail promotions like bulk buying and “try before you buy” to sell consumer information and credit card details to other criminals. According to the Internet Security Threat Report 2009, e-mail addresses and accounts are traded among criminals for anything from five pence to as much as £60, with a full identity going for around £45. I don’t know about you, but I find those figures alarmingly low – until I read on and discover that, according to Symantec, more than 10 million stolen identities are traded each year on the black market.

    The online black market is booming compared to real-world criminal activity. It is more profitable, harder to prosecute and provides anonymity. Whereas, on the streets of London, Metropolitan Police figures indicate that a crime is committed every 37 seconds, an identity is stolen online every three seconds.

    So how does this happen? Well, much of the information is harvested using malware on our PCs with many victims unaware that their computer is a “zombie” acting as part of the botnets that are the main source of online fraud, spam and other scams on the Internet today. In addition, we are putting increasing volumes of personal information onto the web through social networking. Meanwhile, action against criminals is hampered by the fact that national laws typically lag behind technological advances and the fact that the Internet is a global network and so requires co-operation from multiple law enforcement agencies.

    So, what can we do about it? Well, much of the information in this blog post comes from Symantec/Norton and it’s no surprise that they would like us to buy the latest version of their security suite but, even if you use another reputable company’s security products, it’s worth checking out the advice on their Every Click Matters site, including a victim assessment tool that helps you assess your risk (and black market worth) and 10 simple steps we can all take to stay safe. Maybe you, as a reader of this blog, know what to do – but it may be worth highlighting the advice to less-technical friends and family.

  • What does your digital tattoo say about you?

    We’ve all heard of employers Googling prospective (and current) employees to check out their history/online status and there’s the recent story that went viral about someone who forgot she’d added her manager on Facebook before bitching about her job (needless to say, she didn’t have a job when he read what she had to say). Then there’s the story of the Australian call centre worker who was too drunk to work and pulled a sickie… only to be busted on Facebook.

    Maybe these things sound like something that happens to someone else – none of us would be that stupid, would we?

    Actually, it can happen to the best of us, although maybe not in quite such an extreme manner. I’ve become a bit of a Twitter evangelist at work (David Cameron might say that makes me something else…) and, after one of my colleagues suggested that my new manager check out my feed as an example of effective technical knowledge sharing, I hastily checked for any potential lapses of judgment. I did actually remove an update that was probably OK, but I didn’t want to chance it.

    Generally I’m pretty careful about what I say online. I never name my family members or give out my address, family photos are only available to a select group of people, I don’t often mention the name of the town where I live (although this blog is geotagged) and I am very careful to avoid mixing the details of my day job too closely with this website (technical knowledge sharing is fine… company, partner and customer details are not).

    It seems that there is a generation of Internet users who are a little more blasé though, and Symantec are warning consumers about the dangers of sharing their personal information on the ‘net, referring to a “digital tattoo” (described as “the amount of personal information which can be easily found through search engines by a potential or current employer, friends or acquaintances, or anyone else who has malicious intent”).

    The digital tattoo term seems particularly apt because there is a misconception that, once deleted, information is removed from the ‘net but that is rarely the case. Just like a physical (skin) tattoo, removing a digital tattoo can be extremely difficult with the effects including hindered job prospects and identity theft. Symantec’s survey revealed that 31% of under-25s would like to erase some of their personal information online. Nearly two-thirds have uploaded personal photographs and private details such as postcodes (79%) and phone numbers (48%) but, worryingly, one-in-ten under-25s have put their bank details online (not including online purchases) and one-in-20 have even noted their passport number!

    Of course, there are positive sides to social networking – I personally have benefited from an improved relationship with several technology vendors as a result of this blog/my Twitter feed and it’s also helped me to expand my professional network (backed up with sites like LinkedIn and, to a lesser extent, Facebook). What seems clear is that there is a balance to be struck and today’s young people have clearly not been sufficiently educated about the dangers of life in an online society.

  • Computer clamping

    There’s just been a bit of entertainment in the office where I’m working today as one of my colleagues accidentally locked his computer to the desk!

    Having seen a spare desk with a full size keyboard, mouse, screen and port replicator/docking station, he hooked up his laptop and started working – only to find that the docking station had a security device attached and he couldn’t release the PC without a key.  Furthermore, the keyholder had already gone home and wasn’t answering his mobile phone.

    To be honest, I would have fallen for this myself – I didn’t know that the security device worked that way (I would have assumed that I needed to lock the PC in place with a key, not just attach it to a locked docking station). But, for those with a permanent desk and a docking station in a predominantly hot-desking environment, this could become a lucrative sideline in computer clamping – the principle being that, if someone uses your desk, they pay to have you come along with a key and unlock their laptop!

  • Securely wiping hard disks using Windows

    My blog posts might be a bit sporadic over the next couple of weeks – I’m trying to squeeze the proverbial quart into a pint pot (in terms of my available time) and am cramming like crazy to get ready for my MCSE to MCITP upgrade exams.

    I’m combining this Windows Server 2008 exam cramming with a review of John Savill’s Complete Guide to Windows Server 2008 and I hope to publish my review of that book soon afterwards.

    One of the tips I picked up from the book this morning, as I tried to learn as much as I could about BitLocker drive encryption in an hour, was John’s method for securely wiping hard drives using a couple of Windows commands:

    format driveletter: /fs:ntfs /x

    will force a dismount if required and reformat the drive, using NTFS.

    cipher /w:driveletter:

    will remove all data from the unused disk space on the chosen drive.

    I don’t know how this compares with third party products that might be used for this function but I certainly thought it was a useful thing to know. This is not new to Windows Server 2008 either – it’s certainly available as far back as Windows XP and possibly further.

    For more tips like this, check out the NTFAQ or John’s site at Savilltech.com.

  • Identity and security developments at Microsoft

    In amongst all the exciting product announcements for new Windows releases and cloud computing platforms, it’s all too easy to miss out on some of the core infrastructure enhancements that Microsoft is making. Last week I got the chance to catch up with Joel Sider from Microsoft’s Identity and Security group – a new organisation at Microsoft formed to address the issues of identity and security (which are really two sides of the same coin) and which, until recently, have been treated as individual point solutions.

    Joel explained to me that, with a single business group and a single engineering group, Microsoft is able to focus on the complete product stack, from System Center and Identity Lifecycle Manager (ILM – formerly MIIS), through Forefront security to the Windows platform, including Active Directory, Rights Management Services (RMS) and Network Access Protection (NAP).

    Two of the products under the umbrella of the identity and security group have been in the news recently:

    • A release candidate of Identity Lifecycle Manager “2” is available now. Due for final release in the first half of 2009, ILM “2” provides self-service for employees, enhanced administration and automation for IT professionals, and extensibility for developers. In developing this product, Microsoft’s focus was on allowing IT departments to set policies for access, empowering end users and knowledge workers to perform actions and tasks (e.g. reset passwords, manage group membership, etc.). Until the release of this product, such actions would have required the use of third party products (e.g. Quest Active Roles Server) and, unlike MIIS, which was powerful but had a limited user interface, the focus with ILM is on providing an intuitive management interface and self-service capabilities whilst still allowing extensibility (e.g. for audit and compliance purposes). ILM uses a concept of sets to group objects (e.g. “All people”) and then a workflow (authentication, authorisation, or action) may be applied to complete a number of steps (e.g. answering a number of security questions in a password reset scenario, or approving membership of a group and sending out a notification in a group membership scenario).
    • Intelligent Application Gateway (IAG) service pack 2 is also due for release shortly. Originally available only in hardware appliance form, the former Whale Communications product can now be run as a Hyper-V virtual machine to reduce costs and increase flexibility in the infrastructure. In addition, IAG supports access from non-Microsoft browsers (e.g. Firefox) and platforms (i.e. users running Linux and Mac OS X) and has additional optimisers for recently released applications. (For those who are unaware of IAG’s capabilities, it provides granular access to specific applications via an SSL VPN, with support for almost any application and optimisations for those of which it has an awareness – that’s the “intelligent” part of IAG).

    Other significant developments taking place within the identity and security group include: the Windows Azure .NET Identity Framework (codenamed Geneva), which provides a Microsoft .NET identity access control service; Windows CardSpace; and the Forefront integrated security product (codenamed Stirling), which will combine the various disparate Forefront components.

    From my perspective, I’m really encouraged to see Microsoft working to provide a more focused approach. As I’ve written before, many of Microsoft’s identity and security products are the result of acquisitions and, whilst it’s important not to lose the features and functionality that made these products successful in the first place, they also need to be tightly integrated to avoid the inevitable confusion caused by feature overlap and conflicting goals. It seems to me that Microsoft is working towards providing a sensible and logical identity and security portfolio for customers and partners.

  • Trusting a self-signed certificate in Windows

    All good SSL certificates should come from a well-known certification authority – right? Not necessarily (as Alun Jones explains in defence of the self-signed certificate).

    I have a number of devices at home that I access over HTTPS and for which the certificates are not signed by Verisign, Thawte, or any of the other common providers. And, whilst I could get a free or inexpensive certificate for these devices, why bother when only I need to access them – and I do trust the self-signed cert!

    A case in point is the administration page for my NetGear ReadyNAS – this post describes how I got around it with Internet Explorer (IE) but the principle is the same for any self-signed certificate.

    First of all, I added the address to my trusted sites list. As the ReadyNAS FAQ describes, this is necessary on Windows Vista in order to present the option to install the certificate and the same applies on my Windows Server 2008 system. Adding the site to the trusted sites list won’t stop IE from blocking navigation though, telling me that:

    There is a problem with this website’s security certificate.

    The security certificate presented by this website was not issued by a trusted certificate authority.

    Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server.

    We recommend that you close this webpage and do not continue to this website.

    Fair enough – but I do trust this site, so I clicked the link to continue to the website regardless of Microsoft’s warning, at which point IE gave me another security warning:

    Security Warning

    The current webpage is trying to open a site in your Trusted sites list. Do you want to allow this?

    Current site: res://ieframe.dll
    Trusted site: https://mydeviceurl

    Thank you IE… but yes, that’s why I clicked the link (I know, we have to protect users from themselves sometimes… but the chances are that they won’t understand this second warning and will just click the yes button anyway). After clicking yes to acknowledge the warning (which was a conscious choice!) I could authenticate and access the website.

    Two warnings every time I access a site is an inconvenience, so I viewed the certificate details and clicked the button to install the certificate (if the button is not visible, check the status bar to see that IE has recognised the site as from the Trusted Sites security zone). This will launch the Certificate Import Wizard but it’s not sufficient to select the defaults – the certificate must be placed in the Trusted Root Certification Authorities store, which will present another warning:

    Security Warning

    You are about to install a certificate from a certification authority (CA) claiming to represent:

    mydeviceurl

    Windows cannot validate that the certificate is actually from “certificateissuer“. You should confirm its origin by contacting “certificateissuer“. The following number will assist you in this process:

    Thumbprint (sha1): thumbprint

    Warning:

    If you install this root certificate, Windows will automatically trust any certificate issued by this CA. Installing a certificate with an unconfirmed thumbprint is a security risk. If you click “Yes” you acknowledge this risk.

    Do you want to install this certificate?

    Yes please! After successfully importing the certificate and restarting my browser, I could go straight to the page I wanted with no warnings – just the expected authentication prompt.

    Incidentally, although I used Internet Explorer (version 8 beta) to work through this, once the certificate is in the store, any browser that uses the certificate store in Windows should act in the same manner (some browsers, e.g. Firefox, implement their own certificate store). To test this, I fired up Google Chrome and it was able to access the site I had just trusted with no issue but, if I went to another, untrusted, address with a self-signed certificate (e.g. my wireless access point), Chrome told me that:

    The site’s security certificate is not trusted!

    You attempted to reach mydeviceurl but the server presented a certificate issued by an entity that is not trusted by your computer’s operating system. This may mean that the server has generated its own security credentials, which Google Chrome cannot rely on for identity information, or an attacker may be trying to intercept your communications. You should not proceed, especially if you have never seen this warning before for this site.

    Chrome also has some excellent text at a link labelled “help me understand” which clearly explains the problem. Unfortunately, although Chrome exposes Windows certificate management (in the options, on the under the hood page, under security), it doesn’t allow the addition of a site to the trusted sites zone (which is an IE concept) – and that means the option to install the certificate is not available in Chrome. I imagine it’s similar in Firefox or Opera (or Safari – although I’m not sure who would actually want to run Safari on Windows).

    Before signing off, I’ll mention that problems may also occur if the certificate is signed with invalid details – for example the certificate on my wireless access point applies to another URL (www.netgear.com) and, as that’s not the address I use to access the device, that certificate will still be invalid. The only way around a problem like this is to install another, valid, certificate (self-signed or otherwise).
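    As a final aside, the same principle can be applied programmatically. If I wanted a script (rather than a browser) to talk to one of these devices, I could export the self-signed certificate and tell my HTTPS client to trust that specific certificate instead of the usual public CAs, which is the programmatic equivalent of installing it in a root store. Here is a minimal Node.js (TypeScript) sketch of that idea, assuming the certificate has been exported to a hypothetical mydevice.pem file and that mydeviceurl stands in for the real host name:

    // Minimal sketch: trust one specific self-signed certificate for an HTTPS
    // request in Node.js, rather than disabling certificate checking altogether.
    // "mydeviceurl" and "mydevice.pem" are placeholders, not real values.
    import * as fs from "fs";
    import * as https from "https";

    const options: https.RequestOptions = {
      host: "mydeviceurl",              // placeholder host name
      port: 443,
      path: "/",
      // Supplying the device's own certificate as a trusted CA means requests
      // made with these options (and only those) will accept it.
      ca: fs.readFileSync("mydevice.pem"),
    };

    https.get(options, (res) => {
      console.log(`Connected, HTTP status ${res.statusCode}`);
    }).on("error", (err) => {
      // Certificate (or host name) validation failures end up here
      console.log(`Request failed: ${err.message}`);
    });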