Preparation notes for Microsoft exam 70-534: Architecting Microsoft Azure Solutions

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve been preparing for Microsoft exam 70-534: Architecting Microsoft Azure Solutions. At the time of writing, I haven’t yet sat the exam (so this post doesn’t breach any NDA) but the notes that follow were taken as I studied.

Resources I used included:

  • Microsoft Association of Practicing Architects (MAPA) bootcamp (unfortunately the delivery suffered from issues with the streaming media platform and the practical labs were difficult to follow, partly due to changes in the platform).
  • Hands-on time with Azure – though the exam is still mostly based on the old Classic/Azure Service Manager (ASM) model, so I found myself going back to learn things in ASM that I do differently under Azure Resource Manager (ARM).
  • The Microsoft Press exam preparation book, which contains a lot more detail and is pretty readable (or it would be if I wasn’t trying to read it in PDF form – sometimes paperback books are better for flicking back and forward!).
  • A free Azure subscription (either sign up for a one-off £125 credit for a month, or you can get £20 each month for 12 months through Visual Studio Dev Essentials).

The rest of this post contains my study notes – which may be useful to others but will almost certainly not be enough to pass the exam (i.e. you’ll need to read around the topics too – the Azure documentation is generally very good).

Note that Microsoft Azure is a fast-moving landscape – these notes are based on studying the exam curriculum and may not be current – refer to the Azure documentation for the latest position.

Azure Networking

  • Virtual networks (VNets) are used to manage networking in Azure. Can only exist in one Azure region.
  • CIDR notation is used to describe networks.
  • Use different subnets to partition network – e.g. Internet-facing web servers from internal traffic; different environments.
  • Subnet has to be part of VNet range with no overlap.
  • All virtual machines (VMs) in a VNet can communicate (by default) but anything outside cannot talk (by default) – so VNet is default network boundary.
  • In ASM, every VM has an associated cloud service (with its own name @cloudapp.net). Without subnets the VMs can only communicate via a public IP. If multiple cloud services are on same VNet then VMs can communicate using private IP.
  • Endpoints are used to manage connections: an internal (private) endpoint listens on a given port (e.g. for RDP on 3389); an external (public) endpoint is on a defined port number – so traffic goes to a particular server, rather than just to the cloud service.
    • Public from anywhere on the Internet; private only within the cloud service/VNet.
  • Dynamic IP (DIP) is the private IP associated with a VM; only resolvable inside the VNet – external access needs a public IP. Can choose an IP address to use – and it will be reserved.
  • Virtual IP (VIP) – assigned to a cloud service – static public IP for as long as at least one VM running inside the cloud service.
  • Instance Level Public IP (ILPIP) – for direct connection to Azure VM from Internet (not via the cloud service); public IP attached to a VM. In this configuration, whatever ports open on the VM are open to the Internet – effectively bypassing the security of the VNet.
  • Use a VNet-to-VNet VPN to create a tunnel between VNets in different regions. This extends VNets to appear as if they were one.
  • Site-to-site VPN to create tunnel between on-premises network and Azure VNet. Uses persistent hardware on-premises.
  • Point-to-site VPN to create tunnel from individual computers to an Azure VNet. Software-based.
  • A multi-site VPN is a combination of the other methods (e.g. multiple site-to-site connections to the same VNet).
  • Azure ExpressRoute avoids routing via an ISP – effectively a dedicated link from the customer datacentre to an Azure region. High throughput, low latency and no effect on the Internet link.
    • ExpressRoute providers provide point-to-point Ethernet or connect via a cloud exchange. BGP sessions with edge routers on the customer site. 200Mbps/500Mbps/1Gbps/10Gbps.
    • Can use for Azure computing (IaaS); Azure public services (web apps, etc. – PaaS) or Office 365 (SaaS).
  • Secure network with Network Access Control Lists (ACLs), attached to a VIP – define what traffic will be allowed/denied to/from the VIP (i.e. the cloud service). Lower number rule has higher priority. First match is executed and rest are ignored.
    • If there is no ACL – all traffic is allowed (whatever endpoints are open will allow access); if there is one or more permit, deny all others; if there is one or more deny, allow all others; combination of permit and deny to define a specific IP range.
    • Network ACL affects Incoming traffic only.
  • Network Security Groups (NSGs) are attached to a VM or a subnet and act on both inbound and outbound traffic.
    • Default inbound rules allow traffic within the VNet and from the Azure load balancer, and deny all other inbound traffic (rules 65000/65001/65500).
    • Default outbound rules allow traffic within the VNet and to the Internet (0.0.0.0/0), and deny all other outbound traffic (rules 65000/65001/65500).
    • Default rules can’t be edited but can be overridden with higher priority rules (see the PowerShell sketch after this list).
  • Can only use Network ACLs or NSGs – not both together.
  • VMs can have multiple NICs in different subnets – i.e. dual-homed machine.
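As an illustration, here’s a minimal Azure PowerShell (ARM mode) sketch of creating an NSG with a single inbound rule and attaching it to a subnet – the resource group, VNet, subnet and address prefix are hypothetical and assumed to exist already:

# Sketch only: allow HTTPS in from the Internet; the default rules (65000+) still apply unless overridden
$Rule = New-AzureRmNetworkSecurityRuleConfig -Name "allow-https-in" -Description "Allow HTTPS from anywhere" -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix Internet -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange 443
$NSG = New-AzureRmNetworkSecurityGroup -ResourceGroupName "rg-demo" -Location "westeurope" -Name "web-nsg" -SecurityRules $Rule
# Attach the NSG to a subnet (it could equally be attached to an individual NIC)
$VNet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-demo" -Name "vnet-demo"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $VNet -Name "web" -AddressPrefix "10.0.1.0/24" -NetworkSecurityGroup $NSG
Set-AzureRmVirtualNetwork -VirtualNetwork $VNet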

Azure Virtual Machines

  • Azure Hypervisor is similar to Hyper-V (but not the same).
  • Different sizes of VMs are available.
  • VMs are isolated at network and execution level – Azure customers never get access to the hypervisor – only to the VM layer.
  • Use Windows Server 2008 onwards or Linux: OpenSUSE; SUSE Enterprise Linux; CentOS; Ubuntu 12.04; Oracle Enterprise Linux; CoreOS; OpenLogic; RHEL
  • Basic and Standard service tiers – different machine types available:
    • General Purpose: A0-A4 Basic; A5-A7 Standard; A8-A9 Network Optimised (10Gbps networking); A10-A11 Compute Intensive (high end CPUs)
    • D1-D4, D11-D14 with SSD temp storage.
    • DS1-DS4, DS11-14 with premium (SSD) storage.
    • G1-G5 (and GS) with local SSD and lots of RAM.
    • F (compute optimised) and N (GPU) series are also available.

  • Every Azure VM has temporary storage drive (D:) – lost when VM is moved/restarted.
  • VMs may be attached to data disks that persist across VM restarts/redeployments and are locally replicated in-region (and beyond if specified).
  • Can use gallery images or create custom images (to meet custom requirements, e.g. with certain software pre-installed).
  • OS disk always has caching, default Read/Write (data disk caching is optional, default none) – changes need a reboot.
  • Can create a bootable image from an OS disk (not data disk).
  • Can change caching on data disk without reboot.
  • OS disk max 127GB, data disk max 1TB.
  • Only charged for storage used (regardless of what is provisioned).
  • Can take VHDs from on-premises: (Windows Server 2008 R2 SP1 or later), sysprep then upload with Add-AzureVhd -Destination storageaccount/container/name.vhd -LocalFilePath localfile.vhd; for Linux install WALinuxAgent (different preparation for different distributions).
  • Tell cloud service to load balance an endpoint to split load between VMs. With ARM there is the option to define a separate Load Balancer.
  • Encryption at rest for data disks requires third party applications (encryption is in preview though…).
  • Availability set: 2 or more VMs distributed across fault domains and upgrade domains for an SLA of 99.95% (no SLA for single VMs) – see the PowerShell sketch after this list.
  • Auto-scaling based on thresholds (min/max number of instances, CPU utilisation, queue length – between web and worker roles) or time schedule (also time to wait before adding/removing more instances – AKA cooldown period). Needs at least 2 VMs in an availability set.
  • Basic VMs have no load balancing or auto-scaling.
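As a sketch of the ARM approach (with hypothetical names), an availability set is created first and then referenced when the VM configuration is built:

# Sketch only: create an availability set and reference it from a new VM configuration
$AvSet = New-AzureRmAvailabilitySet -ResourceGroupName "rg-demo" -Name "web-avset" -Location "westeurope"
# Any VM built from this configuration is placed in the availability set
$VMConfig = New-AzureRmVMConfig -VMName "web01" -VMSize "Standard_A1" -AvailabilitySetId $AvSet.Id

The OS settings, source image, disks and NIC would still need to be added to the configuration (e.g. with Set-AzureRmVMOperatingSystem, Set-AzureRmVMSourceImage and Add-AzureRmVMNetworkInterface) before calling New-AzureRmVM.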

Azure Storage Service

  • Blob, table, or queue storage (plus file storage for legacy apps) encapsulated inside a storage account.
    • Two types: Standard/Premium – essentially HDD/SSD.
    • Up to 500TB per storage account – can create multiple accounts.
  • Data stored in multiple locations (minimum 3 copies).
    • LRS (Locally Redundant Storage) synchronously replicates 3 copies data in separate fault and update domains. Use for: low cost; high throughput (less replication); data sovereignty concerns re: transfer out of region. If region goes down, so do all copies.
    • ZRS (Zone Redundant Storage) also 3 copies but in at least 2 facilities (1 or 2 regions). Data durable in case of facility failure.
    • GRS (Geo-Redundant Storage) – 6 copies (3 copies in the primary region asynchronously replicated to 3 more copies in a secondary region). Data is still safe in a secondary region but cannot be read (unless Azure flips primary and secondary in the event of a catastrophic failure).
    • RA-GRS (Read-Access Geo-Redundant Storage) – read from the secondary copy, via the storageaccountname-secondary.blob.core.windows.net style of domain name.
  • More copies and more bandwidth is more cost! Also:
    • GRS ingress max 10 Gbps (20 Gbps egress) but does not impact latency of transactions made to the primary location.
    • LRS ingress max 20 Gbps (30 Gbps egress).
  • File storage – mounted by servers and accessed via API. Provides shared storage for applications using SMB 2.1. Use cases:
    • On-premises apps that rely on file shares migrated to Azure VMs or cloud services without app re-write.
    • Storing shared application settings (e.g. config files) or diagnostic data like logs, metrics and crash dumps.
    • Tools and utils for developing or administering Azure VMs or cloud services.
    • Create shares inside storage accounts – up to 5TB per share, 1TB per file. Unlimited total number of files and folders.
    • https://storageaccountname.file.core.windows.net/sharename/foldername/foldername/filename
  • Blob storage: Not a file system – an object store.
    • Create containers inside storage accounts with up to 500TB data per container
    • https://storageaccountname.blob.core.windows.net/containername/blobname
    • Block blobs, with block IDs; blocks are uploaded and then committed – a block doesn’t become part of the blob until it is committed: max 64MB per upload (blocks <=4MB), max 200GB per blob. Can upload in parallel; better for large blobs (generally) and for sequential streaming of data.
    • Page blobs – collection of 512byte pages. Max size set during creation and initialisation (up to 1TB). Write by offset and range – instantly committed. Overwrite single page or up to 4MB at once; Generally used for random read/write operations (e.g. disks in VMs). Page blobs can be created on premium storage for higher IOPs.
    • Access control is via 512bit keys (secret key – used in API calls to sign requests) – two keys so can maintain connectivity whilst regenerate another (i.e. during key rotation).
    • Can have full public read access for anonymous access to blobs in a container; public read access for blobs only (but not to list the blobs in the container); no public read access (default – only signed requests allowed); or a shared access signature – a signed URL for access including permissions, start time and expiry time (see the PowerShell sketch after this list).
    • Lease blob for atomic operations – lease for 15-60 seconds (or infinite). Acquire/renew/change/release (immediately)/break (at lease end).
    • Snapshots – used to create a read-only copy of a blob (multiple snapshots possible but cannot outlive the original blob – i.e. deleting blob deletes the snapshots); charges based on difference.
    • Copy blob to any container within the same storage account (e.g. between environments).
  • Table storage:
    • Store data for simple query – NoSQL key-value store – no locks, joins, validation.
    • http://storageaccountname.table.core.windows.net/tablename
    • Generally, use row key to retrieve data.
    • Can partition tables and generate a partition key.
    • Use shared access signatures for querying/adding/updating/deleting/upserting (insert if does not already exist, else update) table entries
  • Queue storage:
    • Store and access messages through HTTP/HTTPS calls.
    • Each queue entry up to 64KB in size.
    • Store messages up to 100TB.
    • Use for an asynchronous list for processing; messaging layer between applications (avoid handshaking – just add to or consume from the queue); or messaging between web and worker roles.
    • http://storageaccountname.queue.core.windows.net/queuename
    • Operations to put (add), get (which makes message invisible), peek (get first entry without making invisible), delete, clear (all), update (visibility timeout or contents) for messages.
  • Pricing based on storage (per GB/month); replication type (LRS/ZRS/GRS/RA-GRS); bandwidth (ingress is free; egress charged per GB); requests/transactions.
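To illustrate blob access control, here’s a minimal Azure PowerShell sketch that uploads a blob and generates a time-limited shared access signature for it – the account, container and file names are hypothetical:

# Sketch only: $Key is assumed to hold a storage account access key (e.g. copied from the portal)
$Context = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey $Key
Set-AzureStorageBlobContent -File ".\report.pdf" -Container "documents" -Blob "report.pdf" -Context $Context
# Read-only access for one hour; the token is appended to the blob URL as a query string
$SAS = New-AzureStorageBlobSASToken -Container "documents" -Blob "report.pdf" -Permission r -ExpiryTime (Get-Date).AddHours(1) -Context $Context
"https://mystorageaccount.blob.core.windows.net/documents/report.pdf$SAS"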

Web Apps

  • Web Apps are available in 5 tiers: free/shared/basic/standard/premium.
  • These tiers affect: the maximum number of web/mobile/API apps (10/100/unlimited/unlimited/unlimited), logic apps (10/10/10/20 per core/20 per core), integration options (dev/test up to Basic; standard connectors for Standard; premium connectors and BizTalk Services for Premium), disk space (1GB/1GB/10GB/50GB/500GB), maximum instances (-/-/3/10/50), App Service Environments (Premium only) and SLA (Free/Shared none; Basic 99.9%; Standard and Premium 99.95%).
  • Resource Group and Web Hosting Plan are used to group websites and other resources in a single view; can also add databases and other resources; deleting a resource group will delete all of the resources in it.
  • Instance types:
    • Free F1.
    • Shared D1.
    • Basic B1-B3: 1 core, 1.75GB RAM, 10GB storage, with cores and RAM doubling at each step (2/3.5; 4/7) – dedicated VMs running web apps.
    • Standard S1-S3 same cores and RAM but more storage (50GB).
    • Premium P1-P4 same again but 500GB storage (P4 is 8 cores, 14GB RAM).
  • Other things to configure:
    • .NET Framework version.
    • PHP version (or off).
    • Java version (or off) – use the web container version to choose between Tomcat and Jetty; enabling Java disables .NET, PHP and Python.
    • Python version (or off).
  • Scale web apps by moving up plans: Free-Shared-Basic-Standard – changes apply in seconds and affect all websites in the web hosting plan. No real scaling for Free or Shared plans. Basic can change instance size and count. Standard can autoscale based on schedule or CPU – min/max instances (checked every 5 mins). See the PowerShell sketch after this list.
  • Scale database separately.
  • Deployment pipeline can be automated and can flip environments when move from staging to production (flips virtual IP). Can flip back if there are issues.
  • SSL certificates – can add own custom certs (2 options – server name indication with multiple SSL certs on a single VM; or IP SSL for older browsers but only one SSL cert for IP address).
  • Site extensions – no RDP access to the VM, so tools for website: Visual Studio Online for viewing code or phpMyAdmin.
  • Webjobs allow running programs or scripts on website (like cron in Linux or scheduled task in Windows) – one time, schedules or recurring.
  • Can use .cmd, .bat or .exe; .ps1, .sh, .php, .py, .js
  • Monitoring web app via metrics in the portal.
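As a sketch of scaling an App Service plan from the command line (assuming the AzureRM cmdlets of the time and hypothetical resource names – check the parameters against the module version in use):

# Sketch only: move an App Service plan to the Standard tier and run three Small instances
Set-AzureRmAppServicePlan -ResourceGroupName "rg-demo" -Name "demo-plan" -Tier "Standard" -NumberofWorkers 3 -WorkerSize "Small"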

Cloud Services

  • For more complex, multi-tier apps.
    • Web role with IIS
    • Worker role for back-end (synchronous, perpetual tasks – independent of user interaction; uses polling, listening or third party process patterns).
  • Upload code and Azure manages infrastructure (provisioning, load balancing, availability, monitoring, patch management, updates, hardware failures…)
  • 99.95% SLA (min 2 role machines)
  • Auto-scale based on CPU or queue.
  • Communicate via internal endpoints, Azure storage queues, Azure Service Bus (pub/sub model – service bus creates a topic, published by web role and worker role subscriber is notified).
  • Availability: fault domain (physical – power, network, etc.) – cannot control but can programmatically query to find out which domain a service is running in. In ASM, normally 0 or 1. ASM automatically distributes VMs across fault domains.
  • Upgrade domain (logical – services stopped one domain at a time) – default is 5, can be changed.
  • If a deployment has web and worker roles, they are automatically placed in an availability set.
  • Azure Service Definition Schema (.csdef file) has definitions for cloud service (number of web/worker roles, communications, etc.), service endpoints, config for the service – changes required restart of services.
  • Azure Service Configuration Schema (.cscfg file) runtime components, number of VMs per web/worker role and size etc. – changes do not require service restart.
  • Deployment pipeline as for Web Apps.

Azure Active Directory

  • Identity and Access Management in the cloud – provided as a service.
  • Optionally integrate with on-premises AD.
  • Integrate with SaaS (e.g. Office 365).
  • Use cases: system to take care of authentication for application in the cloud; “same sign-on” for applications on-premises and cloud; federation to avoid concerns re: syncing passwords and avoid multiple logins to different apps (even with same sign-on) – provide single sign-on; SSO for 1000s of third-party applications. Effectively, if sync password then same sign-on, if no password sync then single sign-on.
  • Can also enable Multi-Factor Authentication (MFA) for Azure AD and therefore add MFA to third party apps.
  • Directory integration was via the Azure Active Directory Synchronization Tool (DirSync) or Azure AD Sync – both superseded; use Azure Active Directory Connect instead.
  • Can also use Forefront Identity Manager 2010 R2 (or Microsoft Identity Manager?) – originally was needed if sync multiple ADs.
  • Each directory gets a DNS name at .onmicrosoft.com. Also possible to use custom domains (verify domains in DNS).
  • Supports WS-Federation (SAML token format); OAuth 2.0; OpenID Connect; SAML 2.0.

Role-based access control

  • Role = collection of actions that can be performed on Azure resources.
  • Users for RBAC are from the associated Azure AD.
  • Roles can be assigned to external account users by invite.
  • Roles can be assigned to Azure AD security groups (recommended practice, rather than direct role assignment).
  • Roles can also be assigned for Resource Groups (resources inherit access from subscription-Resource Group-Resource).
  • Built-in roles: Owner (create and manage all types of resource); Reader (read all types of resource); Contributor (manage everything except access). Lots of other roles built on this construct – e.g. Virtual Network Contributor.
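For example, a minimal sketch of assigning a built-in role to an Azure AD security group at resource group scope (the group and resource group names are hypothetical):

# Sketch only: grant an Azure AD security group Contributor rights over a resource group
$Group = Get-AzureRmADGroup -SearchString "Web Admins"
New-AzureRmRoleAssignment -ObjectId $Group.Id -RoleDefinitionName "Contributor" -ResourceGroupName "rg-demo"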

Azure SQL Database

  • Relational database as a service (PaaS) – up to 500 GB per database.
  • Easy provisioning, automatic HA, load balancing, built-in management portal, scalability, use existing skills to deploy database, patching, etc. taken care of so less time to manage, easy sync with offline data.
  • It is not the same as SQL Server on a VM though!
    • Unsupported features may have corresponding features in Azure; some are just not available.
  • Performance model with different tiers: Basic, then Standard S0-S3, Premium P1-P2, P4, P6 (formerly P3).
    • Measured in Database Throughput Units (DTUs) – a standardised model to help sizing (a relative model [like ACU for VMs]).
    • Only committing to transactions per hour in Basic, per minute in Standard, per second in Premium.
  • Scaling Azure SQL: Federation is deprecated; Custom Sharding (create multiple databases and use application logic to separate data, e.g. based on customer ID); Elastic Scale (the application doesn’t need to be so smart – the endpoint is the same but there are multiple databases behind it).
  • Backups:
    • SQL database creates automatic backup for active database; at least 3 replicas at any one time – one primary replica and two or more secondaries (more if using GRS).
    • Can restore to a point in time (self-service capability to restore from the automated backups – creates a new database on the same server – zero-cost/zero-admin – number of days depends on service tier: 7, 14 or 35 days for Basic/Standard/Premium), or geo-restore (restore from a geo-redundant backup to any server in any region).
    • Geo-restore is automatically enabled for all tiers at no extra cost – helps when there is a region outage – estimated recovery time <12h, RPO <1h.
  • Also standard geo-replication (protect app from regional outage – one secondary database in Microsoft-defined paired region; secondary is visible but can’t connect to it until failover occurs – discount for secondary DB as offline until failover – standard/premium only with ERT <30s RPO <5s) and active geo-replication (database redundancy within different regions – up to 4 readable secondary servers – asynchronous replication of committed transactions from one DB to another; for write-intensive applications – e.g. load balancing for read-only workloads – premium only with ERT <30s RPO <5s).
    • Regional disaster – Geo Restore, Standard or Active Geo-Replication.
    • Online application upgrade – Active Geo replication.
    • Online application relocation – Active Geo replication.
    • Read load balancing – Active Geo replication.
  • Security: only available via TCP 1433 – blocked by default – define firewall rules at server and database level to open up (i.e. to own IP address). Can define firewall rules programmatically with T-SQL, the REST API and Azure PowerShell (see the sketch after this list).
  • Data encrypted on wire – SSL required all the time
  • Data encrypted at rest – encryption with transparent data encryption – real-time I/O encryption/decryption for data and log files.
  • Only supports SQL Server authentication or Azure AD authentication – i.e. no Windows authentication.
  • First user created (master database principal) cannot be altered or dropped; can configure user-level permissions by logging on to the database and issuing SQL commands.
  • Pricing: DB size plus outbound data transfers (per database, per month) – per hour pricing, so drop DTUs at quiet time.
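As an example of the firewall configuration mentioned above, here’s a minimal Azure PowerShell sketch (the server name, rule name and client IP address are hypothetical):

# Sketch only: allow a single client IP address through the server-level firewall
New-AzureRmSqlServerFirewallRule -ResourceGroupName "rg-demo" -ServerName "demosqlserver" -FirewallRuleName "office-ip" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"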

Azure Mobile Service

  • Cross-platform app development service (PaaS).
  • Mobile apps need to be cross-platform, with cloud storage, ID management, database integration and push notifications.
  • Azure Mobile Services provides mobile back-end as a service (MBaaS).
  • Easily connect to SaaS APIs – e.g. Facebook, Salesforce, etc.
  • Auto-scaling based on incoming customer load.
  • User authentication taken care of by the service.
  • Push notifications to millions in seconds.
  • Offline-ready apps with sync capability.

Azure Content Delivery Network (CDN)

  • Caching public objects from a storage account at point of presence (POP) for faster access close to users (and to scale when a lot of traffic hits).
  • Content served from local edge location. If content not there (first serve), it fetches information from the origin and caches locally.
  • Drastic reduction in traffic on original content (so faster access and more scalable!)
  • Use a CDN for lower latency, higher throughput, improved performance!
  • POP locations separate to Azure regions – not full-fledged DCs.
  • CDN origin can be Azure Storage, Apps, Cloud Services or Media Services (including live streaming) – or a custom origin on any web server.
  • CDN Edge is a cache – not a permanent store.
  • Anycast protocol is used to route user to closest endpoint.
  • Create a CDN endpoint: http://cdnname.azureedge.net/
  • Change website code to point to the CDN. Route dynamic content to origin, static to CDN.
  • Can set a custom domain too (e.g. cdn.domain.com) – avoid browser warnings about content from other domains.
  • Can also enable HTTPS – need to upload the SSL certificate.
  • Default cache is 72 hours – cache control header can be used to control (any value >300s). Use to ensure not serving stale content.
  • Use CDN to cache images, scripts, CSS from Azure Cloud Service but have to provide using HTTP on port 80.
  • Pricing based on bandwidth (between edge and origin) and requests.

Azure Traffic Manager

  • DNS-based routing for infrastructure. Route to different regions, monitoring health of endpoints (HTTP checks) to assist with DR. Many routing policies.
  • Create a Traffic Manager endpoint and route to this via DNS (see the PowerShell sketch after this list).
  • Options include failover load balancing (re-route based on availability, with priority list – 100% of traffic to one endpoint – used for DR/BC rather than scaling); round robin load balancing (shared across various endpoints in rotation – but only to healthy endpoints cf. DNS RR); Weighted round robin load balancing (use weight to distribute traffic between endpoints); performance load balancing (based on latency times).
  • Different to traditional load balancer in that it is DNS-based – user request is direct to endpoint, not through load balancer. Also, note that traffic is direct to web servers – not to Edge locations as in CDN.
  • Pay per DNS request resolved (TTL will keep this down) and per health-check configured.
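As a sketch of creating a performance-routed profile and adding an endpoint with the ARM cmdlets (all names are hypothetical):

# Sketch only: create a profile with performance-based routing and HTTP health checks
New-AzureRmTrafficManagerProfile -ResourceGroupName "rg-demo" -Name "demo-tm" -TrafficRoutingMethod Performance -RelativeDnsName "demo-tm" -Ttl 300 -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"
# Add an external endpoint for the profile to route to and monitor
New-AzureRmTrafficManagerEndpoint -ResourceGroupName "rg-demo" -ProfileName "demo-tm" -Name "westeurope-web" -Type ExternalEndpoints -Target "demo-euw.azurewebsites.net" -EndpointStatus Enabled -EndpointLocation "West Europe"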

Azure Monitoring

  • Diagnostic tasks may include performance measurement, troubleshooting and debugging, capacity planning, traffic analysis, billing and auditing.
  • Monitor via portal; Visual Studio (plugins to parse logs, etc.) or third party tools.
  • Azure management services to manage alerts or view operational logs. Create alerts based on metrics and thresholds (and average to smooth out spikes) and send email to service admins and co-admins or to a specific address.
  • Operational logs are service requests – operation, timestamped, by whom.
  • Visual Studio 2013 has Azure SDK for managing Azure services. Some limitations: with remote debugging cannot have more than 25 role instances in a cloud service.
  • Azure Redis cache monitoring allows diagnostic data stored in storage account – enable desired chart from Redis cache blade to display the metric blade for that chart.
  • System Center 2012 R2 can also monitor, provision, configure, automate, protect and self-service Azure and on-premises.
  • Third party tools like New Relic and AppDynamics.
  • For websites there are application diagnostic logs and site diagnostic logs (3 types: web server logging; detailed error messages; failed request tracing) – access via Visual Studio, PowerShell or portal. Kudu dashboard at https://sitename.scm.azurewebsites.net.
  • View streaming log files (i.e. just see the end): Get-AzureWebsiteLog -Name "sitename" -Tail -Path http
  • View only the error logs: Get-AzureWebsiteLog -Name "sitename" -Tail -Message Error
  • Options include -ListPath (to list log paths) -Message <string> -Name <string> -Path (defaults to root) -Slot <string> -Tail (to stream instead of downloading entire log)
  • Can also turn on diagnostics on storage accounts.
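For example, a minimal sketch of enabling blob service logging and hourly metrics on a storage account (the account name is hypothetical and $Key is assumed to hold an access key):

# Sketch only: turn on logging and hourly metrics for the blob service of a storage account
$Context = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey $Key
Set-AzureStorageServiceLoggingProperty -ServiceType Blob -LoggingOperations All -RetentionDays 7 -Context $Context
Set-AzureStorageServiceMetricsProperty -ServiceType Blob -MetricsType Hour -MetricsLevel ServiceAndApi -RetentionDays 7 -Context $Context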

Azure HDInsight

  • Microsoft Implementation of Hadoop – create clusters in minutes (Windows or Linux); pay per use (no need to leave running); use blob storage as storage layer and Excel to visualise the data.
  • Hadoop uses divide and conquer approach to solving big data problems (chunking): processes the data, then combines it again – using HDFS and MapReduce components.
  • Provision cluster, take large data set (e.g. search engine queries) on master node, distributed to processing nodes (Map). Reduce collects results and collates.
  • Hybrid Hadoop – e.g. for organisations that offer analytics services – burst to cloud…
  • Either site-to-site VPN on-premises to Azure, or ExpressRoute.
  • Supports Storm and HBase clusters natively – can install other software via custom script.
  • Connectors in WebApp (Standard and Premium) – connect to other services (e.g. Azure HDInsight).

High Performance Computing (HPC)

  • HPC not the same as big data:
    • Big data analytics is usually bounded by data volumes and so network IO.
    • HPC usually CPU-bounded.
  • HPC good for financial modelling, media encoding, video and image rendering, smaller computer-aided engineering models, etc.
  • HPC instances are A8/9 (network optimised – high-bandwidth RDMA network 32Gbps within cloud service as well as 10Gbps Ethernet to other services) and A10/11 (compute intensive).
  • Both 8/16 cores, 56/112GB RAM, 382GiB disk.
  • Microsoft HPC Pack 2012 R2 SP1 on Windows Server (on-premises, in Azure or hybrid) – Message Passing Interface (MPI) used (over RDMA network).

Azure Machine Learning

  • Predictive analysis in cloud – as a service, no VMs etc. to manage.
  • Take existing data, analyse by running predictive models and predict future outcomes/trends.
  • Deploy in minutes; drag and drop machine learning algorithms (built-in); use data in Azure; add custom scripts; Marketplace of vendors providing custom solutions.
  • Terminology:
    • Classification (group data).
    • Regression (predict a value).
    • Ranking (order items by criteria).
    • Clustering (take a set of data, e.g. by date range).
  • Get raw data (unstructured or loosely structured) -> data cleaning -> build machine learning model -> predict results.

Azure Automation

  • Script and automate the application lifecycle; simplify cloud management; automate manual, long-running and frequently-repeated tasks (save time and increase reliability).
  • Works with Web Apps, Virtual Machines, Storage, SQL Server and other Azure services.
  • Automation account is a container for Azure Automation resources.
  • Create runbooks – sets of tasks that perform an automated process – implemented as PowerShell workflows (see the sketch after this list).
  • Scheduler to start runbooks daily/hourly/at a defined point in time.
  • Pricing based on minutes/triggers:
    • Free = 500 minutes
    • Basic tier
    • Standard tier
  • Automation is an enabler for DevOps:
    • Dev team loves changes.
    • Ops Team loves stability.
    • Agile is used for development between the business and the dev team.
    • DevOps fills gap between dev and ops.
    • Infrastructure as code; configuration automation; automation testing.
  • Continuous integration – pipeline to delivery and deployment – cycle of integrating solution with various phases:
    • Delivery team check-in to Version Control triggers Build and Unit Tests (with feedback). When Build and Unit Tests are clean, this triggers Automated Acceptance Tests (with feedback). When approval is gained, move to User Acceptance Tests, and then on Final Approval move to release.
  • Continuous Delivery – push-button deployment of any version of software to any environment, on demand – similar to CI but can feed business logic tests.
    • Need automated testing to achieve CD.
  • Continuous Deployment – natural extension to CD; every check-in ends up in a production release.
  • Chef for Configuration Automation: Configuration Management between environments: Build, Test, Release, Deploy (and automate CI/CD). Manage Windows and Linux VMs, integration via Azure Portal. Chef and DSC can be used together to manage infrastructure.
  • Puppet – integrated with Azure and VS 2013 for easy deployment of infrastructure across physical and virtual machines. Can deploy pre-configured Puppet image to create a VM.
  • Deploy Custom Script with VM configuration – run when VM is launched (one of the available config extensions).
  • VM agent is used to install and manage extensions that help interact with the VM (Chef, Puppet, Custom Script).
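As a sketch of what a simple runbook might look like (a PowerShell workflow – the resource group name is hypothetical, and a real runbook would authenticate first, e.g. with a stored credential or Run As connection):

workflow Stop-DevVMs
{
    # Sketch only: stop every VM in a resource group, e.g. on an out-of-hours schedule
    $VMs = Get-AzureRmVM -ResourceGroupName "rg-dev"
    # Workflows can process items in parallel
    foreach -parallel ($VM in $VMs)
    {
        Stop-AzureRmVM -ResourceGroupName "rg-dev" -Name $VM.Name -Force
    }
}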

Azure Media Services

  • Developing video on demand is challenging: cost/managing content/encoding/distribution across multiple devices/streaming experience/DRM content protection/providing high quality video for any device any time anywhere.
  • Ingest data, encode, format conversion, content protection (DRM policies), on-demand streaming, live streaming, analytics, advertising.
  • Need media service account and associated storage account.
  • Azure Media Player is a web video player service backed by Azure Media Services: one player for all popular devices – no need to develop a device-specific player; plays the appropriate format for that device; easy integration with web and apps; standard player controls.
  • Data caching via Azure CDN.
  • Steps:
    • In management portal, create new Media Service with name, storage account and region.
    • Start the Media Service.
    • Scale up streaming units (1 unit=200Mbps).
    • Upload a video file (from local or from Azure storage) – will be stored in storage account without encryption.
    • Publish the file.
    • Configure the encoding options, then video is uploaded into portal (can encode multiple times for different formats with different names).
    • View the media content (copy link into browser).

Azure Resource Manager

  • With ASM even a VM has a cloud service.
  • ARM is pure IaaS, not necessarily cloud service.
  • Deploy, manage and monitor services as a group; deploy repeatedly throughout the application life cycle; use declarative templates to define deployment; can have dependencies between resources; apply RBAC; organise logically by tagging.
  • ASM tightly couples to cloud service – VM in subnet, in VNet, in cloud service, in region, with VIP for DNS and public IP.
  • ARM is more loosely coupled – can have multiple VIPs, NICs, etc. All in a RG (which can span regions). Attached via reference.
  • ASM vs. ARM comparison:
    • Definition format: ASM uses XML; ARM uses JSON.
    • VM deployment: ASM requires a cloud service as a container; ARM does not require a cloud service.
    • Availability set: in ASM, VMs are defined under the same availability set; in ARM the availability set is a resource exposed by the Microsoft.Compute provider – VMs that need HA must be included in the availability set.
    • Fault domains: ASM has a maximum of 2 fault domains; ARM has a maximum of 3 fault domains.
    • Load balancing: in ASM the cloud service provides an implicit load balancer for the VMs; in ARM the load balancer is a resource exposed by the Microsoft.Network provider.
    • Virtual IP address: ASM provides a default static VIP as long as one VM is running in the cloud service; in ARM the public IP is a resource exposed by Microsoft.Network and can be static (reserved) or dynamic.
    • Reserved IP address: in ASM an IP address can be reserved in Azure and associated with a cloud service; in ARM a public IP can be created as static and assigned to a load balancer.
  • Choose deployment mode when provisioning resources. Limited inter-operability so choose the right model.
  • Deploy using
    • Portal
    • PowerShell: Switch-AzureMode -Name AzureResourceManager
    • ARM REST API
    • Azure CLI: azure config mode arm
  • Resource Manager template – a JSON document that deploys and provisions all of the related resources in a single, co-ordinated operation (see the PowerShell sketch after this list).
  • Tags are key-value pairs of metadata: applied to individual ARM resources or ARM RGs – up to 15 tags per Resource or RG
  • RBAC – Owner, Reader or Contributor.
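A minimal sketch of deploying a template into a resource group with the ARM cmdlets (the file and resource group names are hypothetical):

# Sketch only: create the resource group, then deploy the template and its parameter file into it
New-AzureRmResourceGroup -Name "rg-demo" -Location "westeurope"
New-AzureRmResourceGroupDeployment -ResourceGroupName "rg-demo" -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.json"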

Azure Messaging Solutions

  • Service Bus: multi-tenant cloud service – each user creates a namespace to work within.
    • Queues – one-way communication, asynchronous queuing with guarantee of message delivery order (worker has to keep polling).
    • Topics – let each receiving application create a subscription by defining a filter (avoid polling – get notification instead) – pub-sub model. Read with ReceiveAndDelete or PeekLock; can have multiple subscribers.
    • Relays – synchronous 2 way communications between applications – won’t help with buffering.
  • Event hubs – highly scalable ingestion system that can process millions of events per second (e.g. for IoT).
  • Can also queue via storage – more options with service bus but more scalable with storage.

Azure Backup

  • Backup service targeted at replacing tape backup.
  • Can work with on-premises workloads or Azure workloads.
  • On-premises backup – pick a region and create a vault; download vault credential files; download and install the Azure Backup agent; can seed through the Azure Import/Export Service; select a backup policy (start time of backup, retention policies (weekly/monthly/yearly)) – backups are incremental.
  • Azure VM Backup – install agent if not already installed, register VMs with Azure Backup Service (installs backup agent in extensions); select backup policy.
  • Azure Backup is for backing up data on a VM. Priced per protected instance and storage consumed (the price per protected instance steps up at 50GB, then 500GB, then each additional 500GB).

Azure Site Recovery

  • Orchestrates failover and recovery of a VM.
  • On-premises machine replicated to vault in Azure, or to another datacentre – not Azure to Azure.
  • Protect AD and DNS, SQL Server, SharePoint, Dynamics AX, RDS, Exchange, SAP.
  • Can also perform a test failover, starting resources in Azure but not routing the traffic.
  • Use to protect VMware ESX or Hyper-V VMs or physical servers and can be used to migrate to Azure

Business continuity (BC) and disaster recovery (DR)

  • Scenarios: recover from local failures; loss of a region; on-premises to Azure
  • For Azure failures:
    • HA in PaaS (per region): just make sure web and worker roles have 2 or more instances each – then they will automatically be spread across fault domains.
    • For region failure need to plan across regions – more elaborate (make sure code and config is available in a second region).
  • HA in IaaS needs management of VMs in availability sets (need to define manually).
  • At region level, also think about load balancing (VIP), storage (LRS, ZRS, GRS or RA-GRS), Azure SQL replication.
  • Recover from loss of region:
    • Redeploy on disaster (cold DR) – replicate data ready to run (does not give aggressive RTO/RPO).
    • Warm spare (active/passive) – infrastructure in DR region but not fully available (e.g. SQL replication with secondary copy not accessed, not routing traffic to passive).
    • Hot spare (active/active) – two regions at the same time (e.g. SQL on IaaS and replicating itself).
  • Cross regional strategies for DR:
    • VNet – export settings, import in secondary region.
    • Cloud Services – create a separate cloud service in the target region; publish to the secondary region if the primary fails; use Traffic Manager to route traffic.
    • VM – use blob copy API to duplicate VM disks; geo-replicated VM images.
    • Storage – use GRS or RA-GRS (replicated in minutes, so tight RPOs cannot rely on this – need to write own algorithm).
    • Azure SQL:
      • Geo-restore (1 hour RPO/<12 hours RTO).
      • Standard geo-replication (5 secs RPO/30 mins RTO) – no access to secondary.
      • Active geo-replication (5 secs RPO/30 mins RTO) – read access to secondary.
      • Manually export to Azure Storage (blob) with Azure SQL database import/export service.

Securing Azure Resources

  • Cloud security model is shared security model:
    • Users are responsible for securing applications.
    • Cloud Service Provider (CSP) is responsible for providing controls; users for using them!
    • CSP is responsible for infrastructure security.
  • VNet/VM security: use endpoints (ACL for endpoints, NSGs at VM or VNet level).
  • Storage: use shared access signatures.
  • Role-based access control.
  • Encryption.

Microsoft’s UK datacentres: what you need to know


This morning, the UK woke up to an announcement from Microsoft that the UK datacentres for Azure and Office 365 are generally available, making Microsoft the first global provider to deliver a complete cloud (IaaS, PaaS and SaaS) from UK datacentres.

That means:

  • Two new Azure regions in the UK:
    • UK West (Cardiff)
    • UK South (London)
  • Office 365 services from UK datacentres in Durham and London.

Dynamics CRM online will be offered from the UK in the first half of 2017.

That Azure location information was taken from the Azure regions page on the Microsoft website (although my sources tell me that “Cardiff” is really “Newport” – close enough as to make no difference anyway, and London is probably “near London” too).  The Office location information was taken from the Office 365 Interactive Data Maps.

Now, UK customers already using Azure or Office 365 will be asking “will my data be moved to a UK datacentre?”. There’s no official announcement from Microsoft (not that I’ve seen) but my (unofficial) answer is “no”. At least not automatically.

For Azure, it’s good practice to design across multiple regions. There are also implications around geo-replication (which regions are paired with which for business continuity and disaster recovery purposes). Moving resources from one region to another is possible but is also a project that would need to be undertaken by a customer (possibly working with a partner) as a programme of planned resource moves.

For Office 365, it’s worth reading the TechNet advice on Moving core data to new Office 365 datacenter regions. At the time of writing it hasn’t been updated to reflect UK datacentres (it was last updated 28 July 2016) but it currently says:

“Existing customers that have their core customer data stored in an already existing datacenter region are not impacted by the launch of a new datacenter region”

[…]

“The data residency option, and the availability to move customer data into the new region, is not a default for every new region we launch. As we expand into new regions in the future, we’ll evaluate the availability and the conditions of data moves on a region by region basis.”

“New customers or Office 365 tenants created after the availability of the new datacenter region will have their core customer data stored at rest in the new datacenter region automatically.”

The page goes on to state that, assuming the data residency option is made available for the UK (remember, nothing has been announced yet):

“Customers will need to request to have their data moved within a set enrollment window.”

and that:

“Data moves can take up to 24 months after the request period to complete”

There’s also a footnote on the UK interactive data map to say:

“Customers who signed up and selected the United Kingdom for their Office 365 services before September 2, 2016 will have their customer data located in the EMEA datacenter locations.”

So, in short, Office 365 (SaaS) data stays exactly where it is, unless you sign up for a new tenant, or wait for further announcements from Microsoft. Azure (IaaS and PaaS) workloads can be moved to the new regions whenever you are ready.

 

Microsoft Azure URLs


I’ve been doing a lot of Azure reading recently and it struck me that there are many different URLs in use that would be useful to record somewhere.

John Savill (@NTFAQGuy) has noted the main ones in his Windows IT Pro post on Azure URLs to whitelist but I’ll expand on them here to highlight the purpose of each one (and to add some extras):

*.windowsazure.com

https://manage.windowsazure.com/ is still used for the legacy (classic) Azure Service Manager (ASM) portal. http://windowsazure.com/ redirects to http://azure.microsoft.com/

*.azure.com

https://portal.azure.com/ is used for the Azure Resource Manager (ARM) portal. http://azure.com/ redirects to http://azure.microsoft.com/

*.*.core.windows.net

This URL pattern is used for access to Azure Storage:

  • File access: https://storageaccountname.file.core.windows.net/sharename/foldername/foldername/filename
  • Containers in blob storage: https://storageaccountname.blob.core.windows.net/containername/blobname
  • Table storage: http://storageaccountname.table.core.windows.net/tablename
  • Queue storage: http://storageaccountname.queue.core.windows.net/queuename

*.cloudapp.net

Domain name used for cloud services.

*.azurewebsites.net

Domain name used for Azure App Service websites. Each site also has Kudu at https://sitename.scm.azurewebsites.net.

*.database.windows.net

Domain name used for Azure SQL database.

*.trafficmanager.net

Domain name used for Azure Traffic Manager.

*.azureedge.net

Content Delivery Network (CDN) endpoints are available via http://cdnname.azureedge.net/.

*.streaming.mediaservices.windows.net

Domain name used for streaming media services (https://mediaservicename.streaming.mediaservices.windows.net/identifier/filename)

*.onmicrosoft.com

Domain name used for the Microsoft Online Services tenant (tenantname.onmicrosoft.com) – shared between multiple online services, including Azure but also Office 365, etc.

[Edited 12/9/16 – updated to include streaming media services and tenant URL]

Not all software consumed remotely is a cloud service


Helping a customer to move away from physical datacentres and into the cloud has been an exciting project to work on but my scope was purely the Microsoft workstream: migrating to Office 365 and a virtual datacentre in Azure. There’s much more to be done to move towards the consumption of software as a service (SaaS) in a disaggregated model – and many more providers to consider.

What’s become evident to me in recent weeks is that lots of software is still consumed in a traditional manner but as a hosted service. Take for example a financial services organisation who was ready to allow my customer access to their “private cloud” over a VPN from the virtual datacentre in Azure but then we hit a road block for routing the traffic. The Azure virtual datacentre is an extension of the customer’s network – using private IP addresses – but the service provider wanted to work with public IPs, which led to some extra routers being deployed (and some NATting of addresses somewhere along the way). Then along came another provider – with human resources applications accessed over unsecure HTTP (!). Not surprisingly, access across the Internet was not allowed and again we were relying on site-to-site VPNs to create a tunnel but the private IPs on our side were something the provider couldn’t cope with. More network wizardry was required.

I’m sure there’s a more elegant way to deal with this but my point is this: not all software consumed remotely is a cloud service. It may be licenced per user on a subscription model but if I can’t easily connect to the service from a client application (which will often be a browser) then it’s not really SaaS. And don’t get me started on the abuse of the term “private cloud”.

There’s a diagram I often use when talking to customers about different types of cloud deployments. It’s been around for years (and it’s not mine) but it’s based on the old NIST definitions.

Cloud computing delivery models

One customer highlighted to me recently that there are probably some extra columns between on-premises and IaaS for hosted and co-lo services but neither of these are “cloud”. They are old IT – and not really much more than a different sort of “on-premises”.

Critically, the NIST description of SaaS reads:

“The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”

The sooner that hosted services are offered in a multi-tenant model that facilitates consumption on demand and broad network access the better. Until then, we’ll be stuck in a world of site-to-site VPNs and NATted IP addresses…

Improving application performance from Azure with some network routing changes


Over the last few months, I’ve been working with a UK Government customer to move them from a legacy managed services contract with a systems integrator to a disaggregated solution built around SaaS services and a virtual datacentre in Azure.  I’d like to write a blog post on that but will have to be careful about confidentiality and it’s probably better that I wait until (hopefully) a risual case study is created.

One of the challenges we came across in recent weeks was application performance to a third-party-hosted solution that is accessed via a site-to-site VPN from the virtual datacentre in Azure.

My understanding is that outside access to Microsoft services hits a local point of presence (using geographically-localised DNS entries) and then is routed across the Microsoft global network to the appropriate datacentre.

The third-party application is hosted in Bedford (UK) and the virtual datacentre is in West Europe (Netherlands), so the data flows should have stayed within Europe. Even so, a traceroute from the third-party provider’s routers to our VPN endpoint suggested several long (~140ms) hops once traffic hit the Microsoft network. These long hops were adding significant latency and reducing application performance.

I logged a call under the customer’s Azure support contract and after several days of looking into the issue, then identifying a resolution, Microsoft came back and said words to the effect of “it should be fixed now – can you try again?”.  Sure enough, ping times (not the most accurate performance test it should be said) were significantly reduced and a traceroute showed that the last few hops on the route were now down to a few milliseconds (and some changes in the route). And overnight reports that had been taking significantly longer than previously came down to a fraction of the time – a massive improvement in application performance.

I asked Microsoft what had been done and they told me that the upstream provider was an Asian telco (Singtel) and that Microsoft didn’t have direct peering with them in Europe – only in Los Angeles and San Francisco, as well as in Asia.

The Microsoft global network defaults to sending peer routes learned in one location to the rest of the network.  Since the preference of the Singtel routes on the West Coast of the USA was higher than the preference of the Singtel routes learned in Europe, the Microsoft network preferred to carry the traffic to the West Coast of the US.  Because most of Singtel’s customers are based in Asia, it generally makes sense to carry traffic in that direction.

The resolution was to reconfigure the network to stop sending the Singtel routes learned in North America to Europe and to use one of Singtel’s local transit providers in Europe to reach them.

So, if you’re experiencing poor application performance when integrating with services in Azure, the route taken by the network traffic might just be something to consider. Getting changes made in the Microsoft network may not be so easy – but it’s worth a try if something genuinely is awry.

Short takes: calculating file transfer times; Internet breakout from cloud datacentres; and creating a VPN with a Synology NAS


Another collection of “not-quite-whole-blog-posts”…

File transfer time calculations

There are many bandwidth/file transfer time calculators out there on the ‘net but I found this one particularly easy to work with when trying to assess the likely time to sync some data recently…

Internet breakout from IaaS

Anyone thinking of using an Azure IaaS environment for Internet breakout (actually not such a bad idea if you have no on-site presence, though be ready to pay for egress data) just be aware that because the IP address is in Holland (or Ireland, or wherever) location-aware websites will present themselves accordingly.

One of my customers was recently caught out when Google defaulted to Dutch after they moved their client Internet traffic over to Azure in the West Europe region… just one to remember to flag up in design discussions.

Creating a VPN with a Synology NAS

I’ve been getting increasingly worried about the data I have on a plethora of USB hard disks of varying capacities and wanted to put it in one place, then sync/archive as appropriate to the cloud. To try and overcome this, I bought a NAS (and there are only really two vendors to consider – QNAP or Synology).  The nice thing is that my Synology DS916+ NAS can also operate many of the network services I currently run on my Raspberry Pi and a few I’ve never got around to setting up – like a VPN endpoint for access to my home network.

So, last night, I finally set up a VPN, following Scott Hanselman’s (@shanselman) article on Setting up a VPN and Remote Desktop back into your home. Scott’s article includes client advice for iPhone and Windows 8.1 (which also worked for me on Windows 10) and the whole process only took a few minutes.

The only point where I needed to differ from Scott’s article was the router configuration (the article is based on a Linksys router and I have a PlusNet Hub One, which I believe is a rebadged BT Home Hub). L2TP is not a pre-defined application to allow access, so I needed to create a new application (I called it L2TP) with UDP ports 500, 1701 and 4500 before I could allow access to my NAS on these ports.

Creating an L2TP application in the PlusNet Hub One router firewall

Port forwarding to L2TP in the PlusNet Hub One router firewall

Scripting Azure VM build tasks: static IP addresses, BGInfo and anti-malware extensions


Following on from yesterday’s blog post with a pile of PowerShell to build a multiple-NIC VM in Azure, here are some more snippets of PowerShell to carry out a few build-related activities.

Setting a static IP address on a NIC

$RGName = Read-Host "Resource Group"
$VNICName = Read-Host "vNIC Name"
# Get the existing network interface, switch its first IP configuration to a static allocation and write the change back
$VNIC = Get-AzureRmNetworkInterface -Name $VNICName -ResourceGroupName $RGName
$VNIC.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
Set-AzureRmNetworkInterface -NetworkInterface $VNIC

Installing BGInfo

$RGName = Read-Host "Resource Group"
$VMName = Read-Host "Virtual Machine Name"
$Location = Read-Host "Region/Location"
# Add the BGInfo VM extension, which displays system information on the server's desktop wallpaper
Set-AzureRmVMExtension -ExtensionName BGInfo -Publisher Microsoft.Compute -Version 2.1 -ExtensionType BGInfo -Location $Location -ResourceGroupName $RGName -VMName $VMName

Installing Microsoft Antimalware

This one is a little more difficult – the script is a development of Mitesh Chauhan’s work entitled Installing Microsoft Anti Virus Extension to Azure Resource Manager VM using Set-AzureRmVMExtension

It’s worth reading Mitesh’s post for more background on the Microsoft Anti Virus Extension (IaaS Antimalware) and also taking a look at the Security Health in the Azure Portal (currently in preview), which will highlight VMs that have no protection (amongst other things).

Mitesh’s script uses a simple settings string, or for more complex configuration, it reads from a file. I tried to use a more complex setting and it just resulted in PowerShell errors, suggesting this wasn’t proper JSON (it isn’t):

$AntiMalwareSettings = @{
"AntimalwareEnabled" = $true;
"RealtimeProtectionEnabled" = $true;
"ScheduledScanSettings" = @{
"isEnabled" = $true;
"day" = 1;
"time" = 180;
"scanType" = "Full"
};
"Exclusions" = @{
"Extensions" = ".mdf;.ldf;.ndf;.bak;.trn;";
"Paths" = "D:\\Logs;E:\\Databases;C:\\Program Files\\Microsoft SQL Server\\MSSQL\\FTDATA";
"Processes" = "SQLServr.exe;ReportingServicesService.exe;MSMDSrv.exe"
}
}

Set-AzureRmVMExtension : Error reading JObject from JsonReader. Current JsonReader item is not an object: Null. Path '', line 1, position 4.

If I use the JSON form it’s no better:

$AntiMalwareSettings = {
"AntimalwareEnabled": true,
"RealtimeProtectionEnabled": true,
"ScheduledScanSettings": {
"isEnabled": true,
"day": 1,
"time": 180,
"scanType": "Full"
},
"Exclusions": {
"Extensions": ".mdf;.ldf;.ndf;.bak;.trn",
"Paths": "D:\\Logs;E:\\Databases;C:\\Program Files\\Microsoft SQL Server\\MSSQL\\FTDATA",
"Processes": "SQLServr.exe;ReportingServicesService.exe;MSMDSrv.exe"
}
}

Set-AzureRmVMExtension : Unexpected character encountered while parsing value: S. Path '', line 0, position 0.

So the actual script I used is below:

# Install Microsoft AntiMalware client on an ARM based Azure VM
# Check note at the end to be able to open up the SCEP antimalware console on the server if there are problems.
# Author – Mitesh Chauhan – miteshc.wordpress.com (updated by Mark Wilson - markwilson.co.uk)
# For Azure PowerShell 1.0.1 and above
# See https://miteshc.wordpress.com/2016/02/18/msav-extension-on-azurearm-vm/

# Log in with credentials for subscription
# Login-AzureRmAccount

# Select your subscription if required (or default will be used)
# Select-AzureRmSubscription -SubscriptionId "Your Sub ID here"

$RGName = Read-Host "Resource Group"
$VMName = Read-Host "Virtual Machine Name"
$Location = Read-Host "Region/Location"

# Use this (-SettingString) for simple setup
# $AntiMalwareSettings = '{ "AntimalwareEnabled": true, "RealtimeProtectionEnabled": true }';

# Use this (-SettingString) to configure from JSON file
$AntiMalwareSettings = Get-Content '.\MSAVConfig.json' -Raw

# Find the latest IaaSAntimalware extension version and reduce it to major.minor form for -TypeHandlerVersion
$allVersions = (Get-AzureRmVMExtensionImage -Location $location -PublisherName "Microsoft.Azure.Security" -Type "IaaSAntimalware").Version
$typeHandlerVer = $allVersions[($allVersions.count)-1]
$typeHandlerVerMjandMn = $typeHandlerVer.split(".")
$typeHandlerVerMjandMn = $typeHandlerVerMjandMn[0] + "." + $typeHandlerVerMjandMn[1]

Write-Host "Installing Microsoft AntiMalware version" $typeHandlerVerMjandMn "to" $vmName "in" $RGName "("$location ")"
Write-Host "Configuration:"
$AntiMalwareSettings

# Specify which option you want for the -SettingString parameter below: the simple settings string or the contents of the JSON file.
Set-AzureRmVMExtension -ResourceGroupName $RGName -VMName $vmName -Name "IaaSAntimalware" -Publisher "Microsoft.Azure.Security" -ExtensionType "IaaSAntimalware" -TypeHandlerVersion $typeHandlerVerMjandMn -SettingString $AntiMalwareSettings -Location $location

# To remove the AntiMalware extension
# Remove-AzureRmVMExtension -ResourceGroupName $resourceGroupName -VMName $vmName -Name "IaaSAntimalware"

# If you get an error saying the administrator has restricted this app, navigate to "C:\Program Files\Microsoft Security Client"
# Run "C:\Program Files\Microsoft Security Client\ConfigSecurityPolicy.exe cleanuppolicy.xml"
# Or simply drag the cleanuppolicy.xml file above onto the ConfigSecurityPolicy.exe to sort it and you should be in.

The MSAVConfig.json file contains the JSON version of the Anti-Malware settings:

{
    "AntimalwareEnabled": true,
    "RealtimeProtectionEnabled": true,
    "ScheduledScanSettings": {
        "isEnabled": true,
        "day": 1,
        "time": 180,
        "scanType": "Full"
    },
    "Exclusions": {
        "Extensions": ".mdf;.ldf;.ndf;.bak;.trn",
        "Paths": "D:\\Logs;E:\\Databases;C:\\Program Files\\Microsoft SQL Server\\MSSQL\\FTDATA",
        "Processes": "SQLServr.exe;ReportingServicesService.exe;MSMDSrv.exe"
    }
}
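
Before running the script, it’s easy to sanity-check that the file parses as valid JSON; ConvertFrom-Json will throw a parse error if it doesn’t:

# Quick check that MSAVConfig.json is valid JSON (throws a parse error if not)
Get-Content '.\MSAVConfig.json' -Raw | ConvertFrom-Json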

Building a multiple NIC VM in Azure

I recently found myself in the situation where I wanted to build a virtual machine in Microsoft Azure (Resource Manager) with multiple network interface cards (vNICs). This isn’t available from the portal, but it is possible from the command line.

My colleague Leo D’Arcy pointed me to Samir Farhat’s blog post on how to create a multiple NIC Azure virtual machine (ARM). Samir has posted his script on the TechNet Gallery but I made a few tweaks in my version:

#Variables
$VMName = Read-Host "Virtual Machine Name"
$RGName = Read-Host "Resource Group where to deploy the VM"
$Region = Read-Host "Region/Location"
$SAName = Read-Host "Storage Account Name"
$VMSize = Read-Host "Virtual Machine Size"
$AvailabilitySet = Read-Host "Availability Set ID (use Get-AzureRMAvailabilitySet to find this)"
$VNETName = Read-Host "Virtual Network Name"
$Subnet01Name = Read-Host "Subnet 01 Name"
$Subnet02Name = Read-Host "Subnet 02 Name"
$cred=Get-Credential -Message "Name and password for the local Administrator account"
 
# Getting the Network
$VNET = Get-AzureRmVirtualNetwork | where {$_.Name -eq $VNETName}
$SUBNET01 = Get-AzureRmVirtualNetworkSubnetConfig -Name $Subnet01Name -VirtualNetwork $VNET
$SUBNET02 = Get-AzureRmVirtualNetworkSubnetConfig -Name $Subnet02Name -VirtualNetwork $VNET
 
# Create the NICs
$NIC01Name = $VMName+'-NIC-01'
$NIC02Name = $VMName+'-NIC-02'
Write-Host "Creating" $NIC01Name
$VNIC01 = New-AzureRmNetworkInterface -Name $NIC01Name -ResourceGroupName $RGName -Location $Region -SubnetId $SUBNET01.Id
Write-Host "Creating" $NIC02Name
$VNIC02 = New-AzureRmNetworkInterface -Name $NIC02Name -ResourceGroupName $RGName -Location $Region -SubnetId $SUBNET02.Id
 
# Create the VM config
Write-Host "Creating the VM Configuration"
$VM = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize -AvailabilitySetId $AvailabilitySet
$pubName="MicrosoftWindowsServer"
$offerName="WindowsServer"
$skuName="2012-R2-Datacenter"
Write-Host " - Setting the operating system"
$VM = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
Write-Host " - Setting the source image"
$VM = Set-AzureRmVMSourceImage -VM $vm -PublisherName $pubName -Offer $offerName -Skus $skuName -Version "latest"
#Adding the VNICs to the config, you should always choose a Primary NIC
Write-Host " - Adding vNIC 1"
$VM = Add-AzureRmVMNetworkInterface -VM $VM -Id $VNIC01.Id -Primary
Write-Host " - Adding vNIC 2"
$VM = Add-AzureRmVMNetworkInterface -VM $VM -Id $VNIC02.Id
 
# Specify the OS disk name and create the VM
$DiskName=$VMName+'-OSDisk'
Write-Host " - Getting the storage account details"
$SA = Get-AzureRmStorageAccount | where { $_.StorageAccountName -eq $SAName}
$OSDiskUri = $SA.PrimaryEndpoints.Blob.ToString() + "vhds/" + $vmName+"-OSDisk.vhd"
Write-Host " - Setting up the OS disk"
$VM = Set-AzureRmVMOSDisk -VM $VM -Name $DiskName -VhdUri $osDiskUri -CreateOption fromImage
Write-Host "Creating the virtual machine"
New-AzureRmVM -ResourceGroupName $RGName -Location $Region -VM $VM
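
Once the deployment completes, it’s worth checking that both vNICs are attached and that the right one is marked as primary (bear in mind that not every VM size supports multiple NICs, so choose $VMSize accordingly). A minimal check, reusing the variables above:

# List the NICs attached to the new VM and show which one is primary
(Get-AzureRmVM -ResourceGroupName $RGName -Name $VMName).NetworkProfile.NetworkInterfaces | Select-Object Id, Primary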

Upgraded Azure support for Enterprise Agreement customers

I recently found myself in a situation where I tried to log a support request on my customer’s Microsoft Azure subscription, only to find that they didn’t have any eligible support agreements in place.

You'll need to buy a support plan before you can submit a technical support request

That seemed strange because, from 1 May 2016, Microsoft has been offering a 12-month support upgrade to all customers that have bought, or intend to buy, Microsoft Azure services on an Enterprise Agreement (EA), except those customers with a Premier support contract.

Digging a little deeper, I found that:

“Microsoft will begin upgrade for existing Azure customers on Enterprise Agreement on May 1, 2016, and plans to complete the upgrades by September 30, 2016. New customers will be upgraded within 30 days of account activation. Customers will be notified by email upon being upgraded. For more information, please talk with your account manager or contact EA Azure Support through the Enterprise Portal”

But, the Enterprise Agreement Support Offer page that contains this information is subtitled: “to activate, contact your Microsoft account team”, so I contacted my customer’s account team.  Initially, they said that the customer needed to contact their Microsoft Licensing Solution Provider (LSP), who were equally confused, but I pushed a little harder and the account team investigated further, before arranging the necessary support.

So, if you’re an EA customer and you can’t wait until September to get an upgrade to your Azure support agreements, it may just be worth a chat with your Microsoft account team.

Short takes: deleting bit.ly Bitlinks; backing up and restoring Sticky Notes; accessing cmdlets after installing Azure PowerShell

Another collection of short notes to add to my digital memory…

Deleting bit.ly links

Every now and again, I spot some spam links in my Twitter feed – usually prefixed [delicious]. That suggests to me that there is an issue in Delicious or in Twitterfeed (the increasingly unreliable service I use to read certain RSS feeds and tweet on my behalf) and, despite password resets (passwords are so insecure), it still happens.

A few days ago I spotted some of these spam links still among my bit.ly links (bit.ly is the link shortener behind my mwil.it links, and also owns Twitterfeed) and I wanted to permanently remove them.

Unfortunately, according to the “how do I delete a Bitlink” bit.ly knowledge base article – you can’t.

Where does Windows store Sticky Notes?

Last Friday (the 13th) I wrote about saving my work before my PC was rebuilt.

One thing I forgot about was the plethora of Sticky Notes on my desktop, so, today, I was searching for advice on where to find them (in my backup) so that I could restore them.

It turns out that Sticky Notes are stored in user profiles, under %appdata%\Microsoft\Sticky Notes, in a file called StickyNotes.snt. Be aware though, that the folder is not created until the Sticky Notes application has been run at least once. Restoring my old notes was as easy as:

  1. Run the Sticky Notes desktop application in Windows.
  2. Close Sticky Notes.
  3. Overwrite the StickyNotes.snt file with a previous copy.
  4. Re-open Sticky Notes.
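
Steps 2 and 3 can also be scripted; here’s a minimal sketch, assuming the backup copy lives at D:\Backup\StickyNotes.snt (a hypothetical path) and that the desktop app’s process is named StikyNot, as it was on Windows 7/8.x:

# Close Sticky Notes if it's running (process name assumed to be StikyNot, as on Windows 7/8.x)
Stop-Process -Name StikyNot -ErrorAction SilentlyContinue
# Overwrite the current notes file with the backed-up copy (D:\Backup is a hypothetical location)
Copy-Item 'D:\Backup\StickyNotes.snt' -Destination "$env:APPDATA\Microsoft\Sticky Notes\StickyNotes.snt" -Force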

Azure PowerShell installation requires a restart (or explicit loading of modules)

This week has involved a fair amount of restoring tools/settings to a rebuilt PC (did I mention that mine died in a heap last Friday? If only the hardware and software were supplied by the same vendor – oh they are!). After installing the Azure PowerShell package from the SCCM Software Center, I found that cmdlets returned errors like:

Get-AzureRmResource : The term 'Get-AzureRmResource' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

After some RTFMing, I found this:

This can be corrected by restarting the machine or importing the cmdlets from C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\ as following (where XXXX is the version of PowerShell installed):

import-module "C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\azure.psd1"
import-module "C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\expressroute\expressroute.psd1"
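
As a quick check (not from the article quoted above), something like this lists which Azure module versions are present on the machine, which at least confirms the installation worked before deciding whether a restart is needed:

# List the installed Azure PowerShell modules and their versions
Get-Module -ListAvailable Azure* | Select-Object Name, Version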