
About Michel de Rooij

Michel de Rooij, with over 25 years of mixed consulting and automation experience with Exchange and related technologies, is a consultant for Rapid Circle. He assists organizations in their journey to and using Microsoft 365, primarily focusing on Exchange and associated technologies and automating processes using PowerShell or Graph. Michel's authorship of several Exchange books and role in the Office 365 for IT Pros author team are a testament to his knowledge. Besides writing for Practical365.com, he maintains a blog on eightwone.com with supporting scripts on GitHub. Michel has been a Microsoft MVP since 2013.

Forefront Security for Exchange SP2 RU3


The Forefront team released Rollup 3 for Forefront Security for Exchange (FSE) Service Pack 2. This rollup fixes an issue with version 8 of the Kaspersky antivirus engine that was introduced with Rollup 2. The related knowledge base article is KB2420644. Unfortunately, the Forefront Update page doesn't mention the new rollup (yet).

You can request the FSESP2RU3 here.

Exchange 2010 SP1 Rollup 1


Today the Exchange Team released Rollup 1 for Exchange Server 2010 Service Pack 1 (KB2407082). This update raises Exchange 2010 version number to 14.1.255.2.

Here’s the list of changes included in this rollup:

  • 2028967 Event ID 3022 is logged and you still cannot replicate a public folder from one Exchange Server 2010 server to another
  • 2251610 The email address of a user is updated unexpectedly after you run the Update-Recipient cmdlet on an Exchange Server 2010 server
  • 978292 An IMAP4 client cannot send an email message that has a large attachment in a mixed Exchange Server 2010 and Exchange Server 2003 environment
  • 982004 Exchange Server 2010 users cannot access the public folder
  • 983549 Exchange Server 2010 removes the sender’s email address from the recipient list in a redirected email message
  • 983492 You cannot view updated content of an Exchange Server 2010 public folder

When running Forefront Protection for Exchange, make sure you disable Forefront before installing the rollup and re-enable it afterwards; otherwise, the Information Store and Transport services may not start. You can disable Forefront using fscutility /disable and re-enable it using fscutility /enable.

You can download Exchange 2010 SP1 Rollup 1 here.

A Decade in High Availability


A recent post from Elden Christensen, Sr. Program Manager Lead for Clustering & High Availability, reminded me of one of my former employers. When I joined that company back in 2000 to start up a professional services practice based on Windows Server 2000 Data Center Edition, the company was already an established professional services provider in the business critical computing niche market, e.g. Tandem/Compaq/HP NonStop systems, mostly used in financial markets such as banks and stock exchanges. The Windows platform was regarded as inferior at that time by the NonStop folks, and they had good arguments back then.

Remember, those were also the early days when no one was surprised to see an occasional blue screen (people were also using Windows 9x), and what we now know as virtualization was already happening on mainframes in the form of partitioning. At that time, Microsoft had ambitions to take its Windows Server platform into the data center environment, where the NonStop platform had been established for ages and professionals had developed best practices for those environments.

Another part of the discussion was the Fault Tolerance versus High Availability topic: NonStop was already an established Fault Tolerant solution for business critical environments, while Windows only had ambitions to move towards that market with the Data Center product. A logical move, looking at the state of (web) applications, SQL and, last but not least, Exchange, where it was going, and what customers expected of those products regarding availability and reliability. To repeat an infamous quote of a NonStop colleague back then: "E-mail is not business critical". But that was almost 10 years ago. Things have changed… or haven't they?

Single Point of Failure
First I’ll start by introducing the availability concept, which revolves around eliminating the single point of failure. This is an element in the whole system of hardware, software and organization that can cause downtime for a system, i.e. disruption of services. After identifying a single point of failure, we want to eliminate it to prevent downtime which is, after all, the ultimate goal for a business critical system. We can approach this task using two different strategies, Fault Tolerant (FT) or High Availability (HA). The task of identifying and eliminating single points of failure is an ongoing process, as most IT environments are subject to change over time.

Availability
To understand the Fault Tolerant and High Availability strategies, we need to define the term "availability". In the dictionary, availability is defined as the quality or state of being available, or an available person or thing, where in both cases available means present or ready for immediate use. Availability is mostly expressed as a percentage, for example when used in a service level agreement, but what does that percentage mean? To explain this, take a look at the following diagram:

Lifecycle

I assume this lifecycle speaks for itself. Using this diagram, the availability is calculated as follows: MTBF / (MTBF + MTTR). The related expected downtime is calculated as ( 1 – Availability% ) * 1 year. Note that the time between failure and recovery isn’t used in the calculation.

I'll use a simple example: a 500 GB Seagate Barracuda 7200.12 (ST3500412AS) with an MTBF specification of 750,000 hours. You have a 24-hour replacement contract and need about 4 hours to restore the backup, so the MTTR is 28 hours. The availability would then be 750,000 / (750,000 + 28) = 0.9999627, or 99.99627%, resulting in a yearly downtime of (1 – 0.9999627) * (365 days * 24 hours * 60 minutes) ≈ 19.6 minutes.
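The arithmetic above can be checked with a few lines of Python (the 750,000-hour MTBF and the 28-hour MTTR come from the example; the function names are my own):

```python
# Availability = MTBF / (MTBF + MTTR); expected yearly downtime = (1 - A) * 1 year.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

def yearly_downtime_minutes(avail: float) -> float:
    return (1 - avail) * 365 * 24 * 60

# The disk example: MTBF 750,000 h, MTTR = 24 h replacement + 4 h restore.
a = availability(750_000, 24 + 4)
print(f"Availability: {a:.6%}")                                  # 99.996267%
print(f"Yearly downtime: {yearly_downtime_minutes(a):.1f} min")  # 19.6 min
```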

Of course, with hardware these numbers are theoretical and to some extent a marketing thing; how else could Seagate specify an MTBF of 750,000 hours (over 85 years)? I tend to look at it as an indication of the reliability you can expect. For example, compare the MTBF of the 7200.12 drive with an enterprise-class drive from Seagate's ES product line: the ST3500320NS has an MTBF of 1,200,000 hours.

That's the reason you should use enterprise-class drives in your storage solution instead of desktop drives, which aren't designed to run in 24×7 environments. To add to that, the MTBF decreases when drives are used in series (RAID 0: MTBF = 1 / (1/MTBF1 + … + 1/MTBFn)) and increases when they are used in parallel (RAID 1: MTBF * (1 + 1/2 + … + 1/n)). When trying to do these calculations for the whole supply chain, with all the elements and their individual specifications and support contracts, things can get very complex.
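These series and parallel rules can be sketched in Python, assuming identical, independent drives (the classic reliability textbook approximation):

```python
# Series vs. parallel MTBF for independent components.
def mtbf_series(*mtbfs: float) -> float:
    # RAID 0 (series): any drive failing fails the array.
    return 1 / sum(1 / m for m in mtbfs)

def mtbf_parallel(mtbf: float, n: int) -> float:
    # RAID 1 (parallel, n identical drives): MTBF * (1 + 1/2 + ... + 1/n).
    return mtbf * sum(1 / k for k in range(1, n + 1))

print(mtbf_series(750_000, 750_000))   # ~375,000 h: striping halves the MTBF
print(mtbf_parallel(750_000, 2))       # 1,125,000 h: mirroring adds 50%
```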

The 9’s
When talking about availability, it is often expressed as a series of 9's, e.g. 99.9%. The more 9's, the better (less downtime). Note that for each additional level of availability, the required effort increases significantly. By effort, don't think of technical solutions only; it also means organizational measures, like having skilled personnel and proper procedures.

In fact, only a small percentage of outages have technical causes; the majority of incidents are due to human error. And yes, that includes the bad driver, which is programmed by humans. This is why changes in a properly managed infrastructure should always go through test and acceptance procedures in environments representative of, or identical to, the production environment. Unfortunately, this doesn't always happen, as not all IT departments have that luxury, mostly for financial reasons.

| Availability % | Downtime / Year | Downtime / Month |
|----------------|-----------------|------------------|
| 99.0%          | 3.65 days       | 7.3 hrs          |
| 99.9%          | 8.76 hrs        | 43.8 min         |
| 99.99%         | 52 min          | 4.3 min          |
| 99.999%        | 5.2 min         | 26 sec           |
| 99.9999%       | 31 sec          | 2.6 sec          |
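The figures in this table follow directly from the downtime formula; a short Python sketch to reproduce them:

```python
# Downtime per year/month for each availability level: (1 - A) * period.
MIN_PER_YEAR = 365 * 24 * 60

for pct in [99.0, 99.9, 99.99, 99.999, 99.9999]:
    down_year_min = (1 - pct / 100) * MIN_PER_YEAR
    down_month_min = down_year_min / 12
    print(f"{pct}%: {down_year_min:8.2f} min/year, {down_month_min:7.2f} min/month")
```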

Fault Tolerant
The goal of a Fault Tolerant solution is to maximize the Mean Time Between Failure (MTBF). This is achieved by mirroring or replicating systems: these monolithic systems run software in parallel on identical hardware. This is called lockstep (a term which, for your information, refers to synchronized marching).

Because Fault Tolerant systems run in parallel, the results of an operation can be compared. When the results don't match, a fault has occurred. Since the faulty system can't be identified using two parallel systems, there's also a variation of this architecture where one system functions as master and the other as slave, the slave acting as a hot standby. To solve the ambiguity, you could use three systems, where the majority of the systems determines the right output.
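The majority-voting idea can be illustrated with a small Python sketch (an illustration of the concept only, not how any actual FT product implements it):

```python
from collections import Counter

def majority_vote(results):
    """Take the output produced by the majority of parallel systems;
    with three systems, a single deviating (faulty) system is outvoted."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        # With only two systems, a mismatch can't be resolved:
        # we know there is a fault, but not which system is wrong.
        raise RuntimeError("no majority, fault cannot be attributed")
    return value

print(majority_vote([42, 42, 41]))  # -> 42; the deviating system is flagged as faulty
```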

When faults are detected in a Fault Tolerant system, the failing component (or system) is disabled and the mirror takes over. This makes the experience transparent for the end-user. There is one caveat: since Fault Tolerant systems run software in parallel, software faults are also mirrored.

Examples of Fault Tolerant components are ECC RAM, multiple NICs in a Fault Tolerant configuration, multipath network software, RAID 1+ disk systems, or storage with replication technology. Examples of Fault Tolerant systems are HP NonStop (proprietary), Stratus ftServer or Unisys ES7000. There are also software-based solutions, like Marathon EverRun or VMware's FT offering.

High Availability
High Availability aims to minimize the MTTR. This can be achieved through redundant or standby (cold, hot) systems, or through non-technical measures like on-site support contracts. Systems take over the functionality of the failing system after the failure has occurred. Therefore, High Availability solutions aren't always completely transparent to the user. The effects of a failing system and the consequences for the end user depend on the software, e.g. a seamless reconnect or a requirement to log in again. Another point of attention is the potential loss of information caused by pending transactions being lost in the failure. To make the experience more transparent to the user, applications need to be resilient, e.g. by detecting the failure and retrying the transaction.

Examples of High Availability technologies are load balancing – software or hardware-based – and replication, where load balancing is used for static data and replication for dynamic data.

The Present
After a decade, technology has evolved but is still founded on old concepts. Network load balancing is still here, and clustering (anyone remember Wolfpack?), although we moved from shared storage to replication technology, remains largely unchanged. This means either there hasn't been much innovation or the technologies do a decent job; after all, it's still a matter of supply and demand. Yes, we moved from certified-configurations-only shared storage solutions to flexible Database Availability Groups (hey, this is still an Exchange blog), but most changes fall in the added-functionality category or take away constraints, e.g. cluster modes (majority node set, etc.), multiple replicas and configurable replication.

| Windows Server Data Center Edition | x86 | x64 |
|------------------------------------|-----|-----|
| 2000        | Max. 32 GB, 32 CPUs, 4 nodes   | N/A |
| 2003 SP2    | Max. 128 GB, 32 CPUs, 8 nodes  | Max. 512 GB, 64 CPUs, 8 nodes |
| 2003 R2 SP2 | Max. 64 GB, 32 CPUs, 8 nodes   | Max. 2 TB, 64 CPUs, 8 nodes |
| 2008        | Max. 64 GB, 32 CPUs, 16 nodes  | Max. 2 TB, 64 CPUs, 16 nodes |
| 2008 R2     | N/A                            | Max. 2 TB, 64 CPUs (256 logical), 16 nodes |

What about Fault Tolerance and Windows' Data Center Edition as the panacea for all your customers requiring "maximum uptime"? The issue with Fault Tolerance was that it came with a hefty price tag, especially in those days. Costs were a multiple of the costs involved with High Availability solutions on decent (read: stable) hardware. So, for those extra 9's you needed deep pockets. For example, around 2001 a Compaq ES7000 with Windows Server 2000 Data Center Edition, the joint support queue (i.e. Microsoft and OEM) and services came with a $2M price tag, for which you got the promise of 99.9% availability.

Compare that to buying a few ProLiants with Windows 2000 Advanced Server, some Fault Tolerant components (FT NICs, RAID), off-the-shelf High Availability technology and dedicated personnel (justifiable against that DCE price tag) for, say, $250,000. With skilled personnel and operated in a controlled environment, you could easily reach 99% availability. Is that price difference worth 3 days of downtime? Also, the simplicity of implementing those technologies made High Availability on Windows accessible for the masses, and nowadays – certainly in the Exchange world – you seldom see load balancing or some form of clustering not being utilized.

Note that in the past decade I've never encountered Data Center being used for hosting Exchange. In fact, as of Exchange 2003, support for Data Center was dropped. Nowadays, Data Center is regarded as an attractive option for large-scale virtualization based on Hyper-V, not only because Data Center costs less than back then (about $3,000 per CPU – hurray for multi-core – though with a 2-CPU minimum) and runs certified on more hardware, but also because it comes with unlimited virtualization rights, meaning you may run Windows Server 2008 R2 (or previous versions) Standard, Enterprise and Datacenter in the virtual instances without the need to purchase additional licenses for those.

With all the large-scale virtualization and consolidation projects going on, virtualizing Exchange or other parts of your IT infrastructure, it’s good to know that there are other options when required by the business.

Retrieving Exchange version information


Last Update: v1.33, October 22nd, 2018

At some point you may want to create an overview of the current Exchange servers in your organisation and their current product levels. The attribute you might initially look at is AdminDisplayVersion, but unfortunately AdminDisplayVersion doesn't reflect installed roll-ups.

The location that does contain update information is the registry, more specifically the installer subkey related to the installed product. The exact key you should be looking for depends on the installed Exchange version. The path is HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products\$PRODID\Patches\, where $PRODID is:

  • 461C2B4266EDEF444B864AD6D9E5B613 for Exchange 2007.
  • AE1D439464EB1B8488741FFA028E291C for Exchange 2010 or Exchange 2013.
  • 442189DC8B9EA5040962A6BED9EC1F1F for Exchange 2016 or Exchange 2019.

Under this key, a subkey may exist for each applied roll-up.

Looking at the DisplayName value, we see it contains a full description of the roll-up, prepended with the related Exchange version. Distilling that information using a PowerShell script should provide us with the required information.
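As a hypothetical illustration of that distilling step (the actual Get-ExchangeVersion.ps1 script is PowerShell; the sample DisplayName below is modelled on the rollup naming seen in this article, and the exact registry value format may differ), the parsing could look like this:

```python
import re

# Hypothetical DisplayName parser; the sample string and regex are
# illustrative assumptions, not the literal registry format.
def parse_rollup(display_name: str):
    m = re.match(
        r"Update Rollup (?P<rollup>\d+) for (?P<product>.+?) \((?P<kb>KB\d+)\)",
        display_name,
    )
    return m.groupdict() if m else None

info = parse_rollup(
    "Update Rollup 1 for Exchange Server 2010 Service Pack 1 (KB2407082)")
print(info)  # {'rollup': '1', 'product': 'Exchange Server 2010 Service Pack 1', 'kb': 'KB2407082'}
```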

Below you will find the script, Get-ExchangeVersion.ps1. When run, it will show all Exchange 2007 (v8), Exchange 2010 (v14), Exchange 2013 (v15.0) and Exchange 2016 (v15.1) servers with version information, but it will skip Edge servers (due to potential firewall issues) and ProvisionedServer entries ('server' is a placeholder).

The output ($output) is sent to the console. You can easily make the script report to a CSV file by removing the comment in front of the line containing the Export-Csv cmdlet. The output of Get-ExchangeVersion.ps1 looks something like this:

Feedback
Feedback is welcome through the comments. If you have scripting suggestions or questions, do not hesitate to use the contact form.

Download
You can download the script from the TechNet Gallery or from GitHub.

Revisions
Available at the TechNet Gallery.

Exchange Message Size Limits


While traveling through your Exchange organization or beyond, e-mail messages may be subject to all sorts of limitations. One of these is the message size limit, which can be set at the following levels:

  • Organizational Level
  • Send Connector
  • Receive Connector
  • AD Site Links
  • Routing Group Connectors
  • Individual

The path evaluated is as follows: User Send Limit > Receive Connector > Organization Checks > Send Connector > User Receive Limit

In general, the lowest size limit on an e-mail route determines whether a message can be successfully transported from sender to recipient. The exception is the individual setting, which can override the other settings for internal messages. The strategy is to define limits where appropriate and as early in the route as possible. It's a waste of resources to accept a message and send it through the organization via several hops, only to finally reject it because the recipient has a maximum receive size limit.
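The "lowest limit wins" evaluation, including the individual override for internal mail, can be sketched as follows (a simplified model based on this article, not actual Exchange transport code; the limit values are illustrative):

```python
# Effective maximum message size along a route, in MB. None means Unlimited.
def effective_limit(org, send_conn, recv_conn, sender_max, recipient_max,
                    internal: bool):
    if internal and sender_max is not None and recipient_max is not None:
        # Individual settings override org/connector limits for internal mail.
        return min(sender_max, recipient_max)
    limits = [org, send_conn, recv_conn, sender_max, recipient_max]
    return min(l for l in limits if l is not None)

# 25 MB connectors but a 10 MB org limit: a 15 MB message is rejected...
print(effective_limit(10, 25, 25, None, None, internal=False))  # 10
# ...yet two internal users with 1024 MB individual limits can exchange it.
print(effective_limit(10, 25, 25, 1024, 1024, internal=True))   # 1024
```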

Organizational Level
The message size limits on the organization level can be set through the Exchange Management Console via Organization Configuration > Hub Transport > Global Settings by opening the Properties of Transport Settings:

TransportSettings

Of course, you can also view the settings using the Exchange Management Shell using Get-TransportConfig, e.g.

TransportSettings-EMS

As you can see, the default value in Exchange 2010 is 10240 KB (10 MB) for both receive and send message sizes. If you require a higher value, for example to enable people to send and receive larger attachments, you can use the EMC or Set-TransportConfig:

Set-TransportConfig –MaxReceiveSize 25MB –MaxSendSize 25MB

As you might expect, MaxReceiveSize applies to receiving messages and MaxSendSize to sending them. The valid range for this setting is anywhere between 64 KB and 2 GB, or Unlimited. When set to Unlimited (which once was the default value in Exchange 2007 RTM), no limit is imposed. I don't recommend using Unlimited, since it can lead to service disruption caused by processing large messages.

Send Connector
The message size limits on a send connector can be set through the Exchange Management Console via Organization Configuration > Hub Transport > Send Connectors by opening the Properties of the Send Connector:

SendConnector-EMC

You can also use Get-SendConnector to view the setting:

SendConnector

The default maximum sending message size for Exchange 2007/2010 send connectors is 10 MB. If you want to be able to send larger messages over this send connector, you can use the EMC or Set-SendConnector:

Set-SendConnector –Identity Internet –MaxMessageSize 25MB

The valid range for this setting is anywhere between 64 KB and 2 GB, or Unlimited.

Receive Connector
The message size limits on a receive connector can be set through the Exchange Management Console via Server Configuration > Hub Transport by opening the Properties of the Receive Connector in the Receive Connectors pane:

ReceiveConnector-EMC

You can also use Get-ReceiveConnector:

ReceiveConnector

The default maximum receiving message size for receiving messages for Exchange 2007/2010 receive connectors is 10 MB. If you want to be able to receive larger messages over this receive connector, you can use the EMC or Set-ReceiveConnector:

Set-ReceiveConnector –Identity “MAIL1\Default MAIL1” –MaxMessageSize 25MB

The valid range for this setting is anywhere between 64 KB and 2 GB, or Unlimited.

AD Site Link
Messages travelling between Hub Transport servers are subject to AD site link limits. By default site links have no message size limit. You can view AD site link settings using Get-AdSiteLink or use Set-AdSiteLink to configure MaximumMessageSize when required.

ADSiteLink

Note that Hub Transport servers use least cost routing to route messages. When a message exceeds a site link limit, the message will not be delivered. Hub Transport servers will not try to deliver the message using a different route.

Routing Group Connectors
In a co-existence scenario you might have routing group connectors connecting Exchange 2007/2010 to an Exchange 2003 environment. Routing group connectors have no maximum message size limit by default.

To inspect a routing group connector maximum message size settings, use Get-RoutingGroupConnector:

Get-RoutingGroupConnector <ConnectorID> | FL Name, *Max*

To configure a maximum message size limit on a RGC use Set-RoutingGroupConnector:

Set-RoutingGroupConnector <ConnectorID> -MaxMessageSize 25MB

Individual
You can create exceptions to the MaxReceiveSize and MaxSendSize values for mailbox users, mail-enabled contacts and distribution groups. By default, no limits are imposed (i.e. Unlimited). To inspect the settings for a mailbox user, navigate to Recipient Configuration > Mailbox and open the Properties of the user. Select the Mail Flow Settings tab and open the Properties of the Message Size Restrictions settings:

UserMessageSizeSettings

or use the related cmdlet, e.g. Set-Mailbox UserID –MaxSendSize 1GB –MaxReceiveSize 1GB:

UserMessageSizeSettings-EMS

If you set an individual maximum send or receive size higher than the organization or connector limits, the individual setting will override those limits when the message is sent internally, i.e. the recipient resides in the same organization. This way you can create exceptions for certain individuals who require a higher message size limit.