TechEd North America 2011 sessions


With the end of TechEd NA 2011, so ends a week of interesting sessions. Here’s a quick overview of recorded Exchange-related sessions for your enjoyment:

Exchange 2010 Mailbox Role Calculator 16.5


The Microsoft Exchange Team published an update to the Exchange Mailbox Role Calculator, bringing it to version 16.5.

Additions since version 14.4:

  • Added virtualization support;
  • Added symmetrical database distribution with resilience support and a cross-site fail-over option;
  • Added exportable scripts to support preparation of storage and creation of database copies;
  • Added server failure simulations;
  • Added throughput requirements information.

Bug fixes since 14.4:

  • Fixed an issue with diskpart.ps1 that prevented the script from executing;
  • Fixed secondary datacenter disk calculations to use the correct formula for calculating the formatted disk capacity for the transaction log disks;
  • Fixed “—” rounding logic error that could occur in the results tables of the Activation worksheet for site resilient scenarios;
  • Fixed a scenario where the calculator would report there were too many copies in the secondary datacenter validation checks.

You can download the calculator here or consult the changelog here. A related announcement by the Exchange team is here. Usage instructions can be found here.

Exchange 2010 SP2 features, MCM: Exchange 2010 exam-only


The first day of TechEd NA 2011 brought us plenty of exciting, and some less exciting, news on the Exchange frontier.

First, the announcement of changes in Exchange 2010 Service Pack 2. Besides some 500 bug fixes, SP2 contains the following new features:

  • Address Book Policies (also known as GAL segmentation). ABPs are meant to segment the address book, giving users a certain view of it like Address List Segregation did for Exchange 2003/2007. ABPs were already announced back in January. I wonder how this affects, for instance, MailTips, as MailTips might report organization-wide figures (sending mail to X users) while the end user may only see a small fragment of the population. Also, be advised that clients bypassing the CAS server for directory lookups, e.g. LDAP queries, don’t benefit from ABPs. Think Outlook for Mac, but also multifunctional devices, fax solutions, etc.;
  • OWA Mini. This will be a lightweight browser-based client, like OMA in the past, meant for simple browsers;
  • Hybrid Configuration. This wizard is meant to simplify the configuration of on-premises Exchange with Office 365/Exchange Online, reducing the steps required from 49 to 6;
  • OWA cross-site redirection. This will allow clients to be silently redirected to the proper site if they log on to a CAS server located in a site different from the one hosting their mailbox and an ExternalURL has been specified there. This greatly improves the single sign-on experience.
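For those curious, based on the ABP cmdlets as announced for SP2 (syntax may change before release, and all names below are placeholders), creating and assigning an Address Book Policy might look like this:

```powershell
# Create an Address Book Policy tying together a GAL, OAB, room list and
# address list for one division (all names below are placeholders)
New-AddressBookPolicy -Name "Fabrikam ABP" `
    -GlobalAddressList "\Fabrikam GAL" `
    -OfflineAddressBook "\Fabrikam OAB" `
    -RoomList "\Fabrikam Rooms" `
    -AddressLists "\Fabrikam Users"

# Assign the policy to a mailbox; that user then only sees the Fabrikam view
Set-Mailbox -Identity "jdoe@fabrikam.com" -AddressBookPolicy "Fabrikam ABP"
```

Note that this only affects address book views served through Exchange; as mentioned above, clients performing their own directory lookups bypass the policy entirely.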

Be advised that Exchange Server 2010 Service Pack 2 will require schema changes to support the new features. SP2 is scheduled for the 2nd half of 2011.

Second, Microsoft announced that, starting July 2011, the Microsoft Certified Master: Exchange Server 2010 certification will also be available exam-only. This is for IT professionals with 5 years of experience who think they can pass the exams without the intensive 3-week training. Microsoft already did the same thing with the MCM: SQL Server 2008 program last year. The Exchange MCM exam consists of two parts:

  1. MCM: Exchange Server 2010, Knowledge Exam. This exam will be offered by Prometric at select testing centers worldwide;
  2. MCM: Exchange Server 2010, Lab. This exam will be offered by Microsoft via direct remote proctoring at select Microsoft facilities worldwide.

While I think it’s great to have the option to take the exam at a facility in your region, the absence of the 3-week intensive training, which includes meeting and being tutored by people from the Exchange team and meeting Exchange fellows from all over the world, seems a big miss. Also, how will the market respond to MCMs who did the 3-week training versus MCMs who didn’t; would the latter be considered inferior or less knowledgeable? If I had the choice, I’d go through the additional 3 weeks of training, extending my network and having a chance to ask my questions at the source.

Thanks to people like Dave Stork and Jeff Guillet for live reports through Twitter (#msteched).

You can watch a recording of Greg Taylor’s session on SP2 features here. The official related Exchange team blog is here. More information on the new Microsoft Certified Master: Exchange Server 2010 program here; the original announcement is here.

Besides all this, a recording of Scott Schnoll’s session on Exchange 2010 Tips & Tricks can be viewed here.

(Updated on May 17th with session links)

Virtualized Exchange 2010 SP1 UM, DAGs & Live Migration support


Today, just before TechEd North America 2011, Microsoft published a whitepaper on virtualizing Exchange 2010, “Best Practices for Virtualizing Exchange Server 2010 with Windows Server® 2008 R2 Hyper-V”.

There are some interesting statements in this document which I’d like to share with you, also because the Exchange team published an article on supported scenarios regarding virtualization shortly after this paper was published.

First, as Exchange fellow Steve Goodman blogged about, a virtualized Exchange 2010 SP1 UM server role is now supported, albeit under certain conditions. More information on this at Steve’s blog here.

The second thing is that live migration, or any form of live migration offered by products validated through the Server Virtualization Validation Program (SVVP), is now supported for Exchange 2010 SP1 Database Availability Groups. Until recently, the support statement for DAGs and virtualization was:

“Microsoft does not support combining Exchange high availability (DAGs) with hypervisor-based clustering, high availability, or migration solutions that will move or automatically failover mailbox servers that are members of a DAG between clustered root servers. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn’t employ clustered root servers, or the clustered root servers have been configured to never failover or automatically move mailbox servers that are members of a DAG to another root server.”

The Microsoft document on virtualizing Exchange Server 2010 states the following on page 29:

“Exchange server virtual machines, including Exchange Mailbox virtual machines that are part of a Database Availability Group (DAG), can be combined with host-based failover clustering and migration technology as long as the virtual machines are configured such that they will not save and restore state on disk when moved or taken offline. All failover activity must result in a cold start when the virtual machine is activated on the target node. All planned migration must either result in shut down and a cold start or an online migration that utilizes a technology such as Hyper-V live migration.”

The first option, shutdown and cold start, is what Microsoft used to recommend for DAGs in VMware HA/DRS configurations, i.e. perform an “online migration” (e.g. vMotion) of a shut-down virtual machine. I blogged about this some weeks ago here, since VMware wasn’t always clear about this. Depending on your configuration, this might not be a satisfying solution when availability is a concern.

The online migration statement is new, as is the host-based fail-over clustering. In addition, though the paper is aimed at virtualization solutions based on Hyper-V R2, the Exchange team article is clearer on supported scenarios for Exchange 2010 SP1 with regard to 3rd-party products (e.g. VMware HA): if the product is supported through the SVVP program, usage of Exchange DAGs is supported. Great news for environments running or considering virtualizing their Exchange components.

Be advised that in addition to the Exchange team article, the paper states the following additional requirements and recommendations as best practice:

  • Exchange Server 2010 SP1;
  • Use Cluster Shared Volumes (CSV) to minimize offline time;
  • The DAG node will be evicted when offline time exceeds 5 seconds. If required, increase the heartbeat timeout to a maximum of 10 seconds;
  • Implementation of latest patches for the hypervisor;
  • For live migration network:
    – Enable jumbo frames and make sure network components support it;
    – Change receive buffers to 8192;
    – Maximize bandwidth.
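The heartbeat settings mentioned above live in the DAG’s underlying failover cluster. Assuming the cluster.exe tooling of Windows Server 2008 R2 (the DAG name is a placeholder), the tuning could be sketched as follows:

```powershell
# Inspect the current heartbeat settings of the DAG's underlying failover
# cluster (run on a DAG member; DAG01 is a placeholder name)
cluster.exe /cluster:DAG01 /prop

# The default timeout is 5 seconds (SameSubnetDelay 1000 ms x
# SameSubnetThreshold 5); raising the delay to 2000 ms yields the
# 10-second maximum mentioned above
cluster.exe /cluster:DAG01 /prop SameSubnetDelay=2000
```

Keep in mind this raises the time before a genuinely failed node is detected, so only stretch it when live migration offline times actually require it.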

Note that on May 17th the DAG support statement for Exchange 2010 SP1 on TechNet was updated to reflect this. However, the last two sentences might restart those “are we supported” discussions again:

“Hypervisor migration of virtual machines is supported by the hypervisor vendor; therefore, you must ensure that your hypervisor vendor has tested and supports migration of Exchange virtual machines. Microsoft supports Hyper-V Live Migration of these virtual machines.”

So, if vendor A, e.g. VMware, has tested and supports vMotioning DAGs with their hypervisor X, Microsoft will support Live Migration for virtual machines on hypervisor X using Hyper-V? Now what kind of statement is that?

(Updates: May 16th – statements from EHLO blog, May 17th – mention updated TechNet article)

Thoughts on “Five things that annoy me about Exchange 2010”


An article on SearchExchange by Greg Shields, MVP Remote Desktop Services, VMware vExpert and a well-known writer and speaker, covered Greg’s annoyances with Exchange Server 2010. I doubt these points would make the top 5 annoyances of the common Exchange 2010 administrator, apart from the fact that some of the arguments are flawed. Here’s why.

Role Based Access Control Management
First, the article complains about the complexity of Exchange 2010’s Role Based Access Control system, or RBAC for short. It’s clear what RBAC’s purpose is: managing the security of your Exchange environment using elements like roles, groups, scopes and memberships (for more details, check out one of my earlier posts on RBAC here). RBAC was introduced with Exchange 2010 to provide organizations more granular control compared to earlier versions of Exchange, where you had to manage security using a (limited) set of groups and Active Directory. In smaller organizations the default setup may – with a little modification here and there – suffice.

For other, larger organizations this will enable them to fine-tune the security model to their business demands. And yes, this may get very complex.

Is this annoying? Is it bad that it can’t be managed from the GUI in all its facets? I think not, for two reasons. First, Exchange administrators should familiarize themselves with PowerShell anyway; it is here to stay and is the scripting language of choice for recent Microsoft product releases. Second, in most cases setting up RBAC will be a one-time exercise. Thinking it through and setting it up properly is just one of the aspects of configuring the Exchange 2010 environment. Also, I’ve seen pre-Exchange 2010 organizations with delegated permission models that also took a significant amount of time to fully comprehend, beating the author’s “20-minute test” easily.
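As a quick illustration of that one-time exercise, a minimal RBAC customization in the Exchange Management Shell could look like this (the group, role and OU names are made up for the example):

```powershell
# Create a scope limiting recipient administration to one OU
New-ManagementScope -Name "Helpdesk NL" `
    -RecipientRoot "fabrikam.com/NL" `
    -RecipientRestrictionFilter "RecipientType -eq 'UserMailbox'"

# Derive a custom role from a built-in one and strip a powerful cmdlet
New-ManagementRole -Name "Limited Recipients" -Parent "Mail Recipients"
Remove-ManagementRoleEntry "Limited Recipients\Remove-Mailbox" -Confirm:$false

# Tie role, security group and scope together in a single assignment
New-ManagementRoleAssignment -Name "Helpdesk NL Assignment" `
    -SecurityGroup "Helpdesk NL Admins" -Role "Limited Recipients" `
    -CustomRecipientWriteScope "Helpdesk NL"
```

Once defined, day-to-day administration is simply a matter of group membership.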

The DAG three-server minimum
The article talks about the requirement to put the 3rd copy in a Database Availability Group (DAG) on a “witness server” in a separate site to get the best level of high availability. It looks like the author mixed things up, as there’s no such thing as a “witness server”, nor a requirement to host anything as such in a separate site. In a DAG, there’s a witness share (File Share Witness, or FSW for short) which is used to determine majority for DAG configurations with an even number of members. This share is hosted on a member server, preferably one that is part of the e-mail infrastructure, e.g. a Hub Transport server, located in the same site where the (largest part of the) user population resides.

Also, there’s no requirement for a 3rd DAG member when high availability is required. The 3rd DAG member is a Microsoft recommendation when using JBOD storage. For disaster recovery a remote 3rd DAG member could make sense, but then you wouldn’t require a “witness server” given the odd number of DAG members.

Note that there’s a difference between high availability and disaster recovery. Having multiple copies in the same site offers high availability; having remote copies provides resilience. More information on DAGs and the role of the File Share Witness in my Datacenter Activation Coordination article here.
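For reference, pointing a DAG at its witness share is a one-liner in the Exchange Management Shell (the DAG, server and path names are placeholders):

```powershell
# The witness is just a file share on a designated server, typically a Hub
# Transport server in the primary site
Set-DatabaseAvailabilityGroup -Identity DAG01 `
    -WitnessServer HUB01 -WitnessDirectory C:\DAG01\FSW

# Verify; the witness share is only in use with an even number of members
Get-DatabaseAvailabilityGroup DAG01 -Status | `
    Format-List Name, WitnessServer, WitnessDirectory, WitnessShareInUse
```

Exchange creates and secures the share itself, which is why there is no separate “witness server” role to install.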

Server Virtualization and DAGs
Next, the article states that Exchange 2010 DAGs don’t support high availability options provided by the virtualization platform, which is spot on. Microsoft and VMware have been squabbling for some time over DAGs in combination with VMware’s HA/DRS options, leading to the mentioned support statement from Microsoft. VMware did their part by publishing statements like “VMware does not currently support VMware VMotion or VMware DRS for Microsoft Cluster nodes”; what doesn’t help is putting this in the best practices guide as a side note on page 64. More recently, VMware published a support table for VMware HA/DRS and Exchange 2010 indicating a “YES” for VMware HA/DRS in combination with Exchange 2010 DAGs. I hope that was a mistake.

In the end, I doubt that DAGs being unsupported in conjunction with VMware HA/DRS (or similar products from other vendors) will be a potential deal breaker, as the author states. That might be true for organizations already utilizing those options as part of their strategy. In that case it would come down to evaluating running Exchange DAGs without those options (which they happily will). Not only will that offer organizations Exchange’s availability and resilience options with much greater flexibility and a richer function set than a non-application-aware virtualization platform would, it also saves some bucks in the process. For example, where VMware can recover from data center or server failures, DAGs can also recover from database failures and several forms of corruption.

Exchange 2010 routing
The article then continues with Exchange 2010 following Active Directory sites for routing. While this is true, it isn’t something new. With the arrival of Exchange 2007, routing groups and routing group connectors were traded for AD sites to manage routing of messages.

The writer’s annoyance here is that Exchange must be organized to follow the AD site structure. Is that bad? I think not. Of course, with Exchange 2003 organizations could skip defining AD sites, but they should (re)think their site structure anyway, since more and more products use AD site information. I also think organizations that haven’t designed an appropriate AD site structure following recommendations may have issues bigger than Exchange. In addition, other products like System Center also rely on a proper AD site design.

Also, when required, organizations can control message flow in Exchange using hub sites or connector scoping, for instance. It is also possible to override site link costs for Exchange. While not all organizations will utilize these settings, they will address most organizations’ needs. Also, by being site-aware, Exchange 2010 can offer functionality not found in Exchange 2003, e.g. Autodiscover or CAS servers/CAS arrays having site affinity.
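To illustrate, both options are simple shell settings (the site and site link names are placeholders):

```powershell
# Force mail between other sites to relay through a central hub site
Set-AdSite -Identity "Amsterdam" -HubSiteEnabled $true

# Override the AD replication cost for Exchange routing purposes only;
# AD replication itself keeps using the original site link cost
Set-AdSiteLink -Identity "Amsterdam-London" -ExchangeCost 10
```

In other words, following the AD site topology doesn’t mean being stuck with it for message routing.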

Ultimately, it is possible to set up a separate Exchange forest. By using a separate forest with a different site structure, organizations can isolate directory and Exchange traffic to route it through different channels.

CAS High Availability complicated?
The last annoyance mentioned in the article is about the lack of wizards to configure CAS HA features, e.g. to configure a CAS array with network load balancing the way the DAG wizard installs and configures fail-over clustering for you. While true, I don’t see this as an issue. While setting up NLB was not too complex and fit for small businesses, nowadays Microsoft recommends using a hardware load balancer, making NLB less important. And while wizards are nice, most steps should be performed as part of a (semi-)automated procedure, e.g. reconfiguring after a fail- or switch-over. Such a procedure or script can be tested properly, making it less prone to error.
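For what it’s worth, the shell part of setting up a CAS array is short; it’s the load balancer configuration that does the heavy lifting (the FQDN, site and database names below are placeholders):

```powershell
# Create the CAS array object for the site; the load balancer, not this
# cmdlet, handles the actual distribution of client traffic
New-ClientAccessArray -Name "CAS Array Amsterdam" `
    -Fqdn "outlook.fabrikam.com" -Site "Amsterdam"

# Point databases at the array so Outlook profiles use the array FQDN
# instead of an individual CAS server name
Set-MailboxDatabase "DB01" -RpcClientAccessServer "outlook.fabrikam.com"
```

A wizard wrapping these two cmdlets wouldn’t add much; the real work is in the (hardware) load balancer in front of the array.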

The article also finds it an annoyance that network load balancing and Windows fail-over clustering are mutually exclusive. Given that hardware load balancing is recommended and cost-effective, supported appliances have become available, this restriction is becoming a non-issue.

More information on configuring CAS arrays here and details on NLB with clustering here.

Final words
Now don’t get the impression I want to condemn Greg for sharing his annoyances with us. But when reading the article I couldn’t resist responding to some inaccuracies, sharing my views in the process. Most important is that we learn from each other while discussing our perspectives and views on the matter. Having said that, you’re invited to comment or share your opinions below.