The calculated deliverability score is intended to provide an accurate assessment of the proportion of e-mails deleted or corrupted by spam filters.
The Deliverability Score offers users several advantages that other delivery metrics, even in combination, do not:
Direct measurement (compared to indirect deliverability metrics such as IP reputation or complaint rates)
Easy interpretability (the loss of response, i.e. the number of missing opens, can easily be translated into sales losses)
Applicable both to individual e-mail campaigns and to groups of e-mails (e.g., all campaigns in month N)
Valid results shortly after dispatch begins
The basic idea of the calculation approach is that delivery problems always show up as a drop in responses, and that this drop in the response rate differs from domain to domain, because the spam filters of the individual recipient domains do not match each other (except within company groups such as 1&1, which use the same or similar spam filters for several domains).
If the drop in the response rate is similar for all major domains, the probability of delivery problems is much lower than in the case of large fluctuations in the response rates across domains.
For example, if the opening rates of the t-online, web.de, gmail and hotmail recipients are all between 20-25% below average, then factors such as the subject line, sender or send time explain the drop more plausibly than spam filter problems. However, if the drop is 80% for gmail while the other domains remain between 20-25%, there is a high probability of a spam filter problem at gmail.
Therefore, the algorithm for calculating the deliverability score first selects the domain with the smallest drop in response. For this domain, the probability of spam filter problems is evidently the lowest; its delivery rate serves as the reference that every other domain should be able to reach with good delivery. Then, for each other domain, the number of missing openings is calculated from the difference to the reference delivery rate and the domain's share of the distribution list. The deliverability score is then calculated as an aggregated relative index over all domains and describes the proportion of measured openings compared to the openings expected at optimal delivery.
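The steps above can be sketched in a few lines of Python. This is an illustrative reading of the described method, not an official implementation; the function name and the use of a dict per domain are assumptions made for the example.

```python
def deliverability_score(sent, opens, normal_rate):
    """Illustrative sketch of the DROPS calculation described above.
    All arguments are dicts keyed by recipient domain: e-mails sent,
    opens measured, and the domain's normal ("reference") open rate."""
    # Delivery ratio per domain: measured open rate relative to normal.
    ratio = {d: (opens[d] / sent[d]) / normal_rate[d] for d in sent}
    # Reference: the domain with the smallest drop in response.
    reference = max(ratio.values())
    # Expected opens if every domain delivered at the reference ratio.
    expected = sum(sent[d] * normal_rate[d] * reference for d in sent)
    # Score: proportion of measured opens vs. expected opens.
    return sum(opens.values()) / expected
```

With made-up numbers echoing the gmail example above (gmail opens at 20% of its normal rate while the other domains reach 80%), the score comes out below 1, quantifying the opens lost to the suspected filter problem.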
This method can be characterized with the acronym
DROPS (Domain Relative Open Rate Score)
For the calculation, opening rates per domain are needed that can be considered normal. It is advisable to use average opening rates, e.g. over the last 20 dispatches, but special dispatches (such as campaigns to non-responsive contacts) should be omitted, since these can distort the average.
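One way to derive these per-domain reference rates is sketched below, under the assumption that the dispatch history is available as a list of records with a flag for special dispatches; the field names (`special`, `rates`) are illustrative, not part of any specified data model.

```python
def reference_rates(history, window=20):
    """Average open rate per domain over the last `window` regular
    dispatches. Special dispatches (e.g. reactivation campaigns to
    non-responsive contacts) are skipped so they do not distort the
    average. `history` is a list of dicts like
    {"special": False, "rates": {"gmail": 0.24, ...}} (illustrative)."""
    regular = [h["rates"] for h in history if not h["special"]][-window:]
    domains = regular[0].keys()
    return {d: sum(r[d] for r in regular) / len(regular) for d in domains}
```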
The data of multiple domains can also be grouped and reported as a residual group.
If the opening rate of a domain is significantly higher than the domain's reference rate, a permanent delivery problem for this domain may have been resolved. In this case, the algorithm learns, because the domain's reference rate continuously increases.
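This learning behavior can be modeled in different ways; a minimal sketch, assuming an exponential moving average rather than the rolling window above (the smoothing factor `alpha` is an assumption for the example):

```python
def update_reference(current_ref, measured_rate, alpha=0.1):
    """Illustrative sketch: an exponential moving average pulls the
    domain's reference rate toward recently measured rates, so after a
    permanent delivery problem is fixed, the reference rate rises a
    little with each subsequent dispatch."""
    return (1 - alpha) * current_ref + alpha * measured_rate
```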
A campaign can certainly produce unfavorable opening rates and still achieve a high deliverability score. In this case, the below-average performance of the campaign is not due to delivery problems but to other factors, such as a badly chosen subject line.
If you choose average values of the last campaigns as the reference value for each campaign, be aware that these averages will decrease over multiple consecutive campaigns with delivery problems. An improvement in the score then does not necessarily mean the problem is solved, but merely that no further deterioration has occurred. If you want to check whether a delivery problem has been resolved, determine the reference values from earlier campaigns in which the problem had not yet occurred.
The calculation method is a simplified model in which bounce rates are ignored. Delivery problems due to bounces can be identified by
. More information about bounce rates can be found