Key Takeaways
- Systematic A/B testing of B2B emails can drive 30-40% or more relative lifts in open rates by optimizing high-impact levers like subject lines, sender name, and send time.
- Treat open rate tests like real experiments: test one variable at a time, use statistically valid sample sizes, and run tests long enough to capture your buyers' normal workweek patterns.
- Average B2B email open rates sit around 19-21%, with cold outreach benchmarks closer to 27-28%, so even a 2-3 point lift can translate into hundreds of additional opens and replies for active SDR programs.
- Personalized and highly relevant subject lines (name, role, pain point, or trigger event) consistently outperform generic ones and should be the first thing your SDR team tests.
- A/B testing shouldn't stop at subject lines, sender name, preview text, and send time can all be optimized, especially for cold outbound sequences.
- Open rates are a directional metric, not the finish line, always connect your A/B tests back to replies, meetings booked, and pipeline created.
- For teams without time or bandwidth to run disciplined testing, outsourcing to a specialized partner like SalesHive can accelerate learning and lift open rates across your outbound programs.
B2B email open rates hover around 19-21% on average, with cold outreach slightly higher but significantly harder to optimize. By running disciplined A/B tests on subject lines, sender name, and send times, teams that systematically test see up to 37% higher open rates and nearly 50% better click-throughs, directly feeding pipeline. In this guide, you’ll learn exactly what to test, how to run statistically sound experiments, and how to turn open-rate wins into booked meetings and revenue.
Introduction
If your team is still writing B2B email subject lines by gut feel, you’re leaving pipeline on the table.
Average B2B email open rates hover around 19-21%, and cold outreach benchmarks land closer to 27-28%. That means more than two‑thirds of your prospects never even see your carefully crafted pitch. The lever that changes that, without doubling your SDR headcount, is disciplined A/B testing.
In this guide, we’ll break down how top outbound teams use A/B testing to systematically lift open rates, what variables actually move the needle in B2B, and how to run tests that produce real learnings instead of pretty noise. We’ll also talk about how to connect open‑rate wins to replies, meetings, and revenue so your team doesn’t get stuck chasing vanity metrics.
Why Open Rate Still Matters (Even in 2025)
The reality: open tracking is messy, but still useful
Open rate has taken some hits over the last few years. Apple Mail Privacy Protection, image blocking, and client quirks all distort open tracking because it typically relies on a tiny tracking pixel being loaded. Many clients block that by default, and some (like Apple Mail) pre‑load images, artificially inflating opens.
So why are we still talking about open rate?
Because for B2B sales teams, open rate is still a directional signal, a way to compare two versions of the same email, sent to similar audiences, and see which one wins the inbox battle. You shouldn’t use it as an absolute measure of campaign success, but you absolutely should use it to:
- Identify subject lines that consistently pull more attention
- Compare send‑time windows for the same segment
- Detect deliverability issues when rates suddenly tank
Think of open rate as your first filter: if they don’t open, they can’t reply. But it’s not the finish line.
Benchmarks: what ‘good’ looks like in B2B
Recent data pegs average email open rates at around 21.5% across industries, with B2B campaigns a bit lower at roughly 19.2%. A 2025 B2B deliverability report shows overall B2B email open rates around 20.8%, and cold email opens averaging 27.7% (with top performers hitting north of 40%).
Where does that leave you?
- If your warm B2B campaigns (newsletters, nurture, customer marketing) are consistently under 18%, you’ve got clear upside with testing.
- If your cold outbound sits under 20-22%, you’re almost certainly underperforming what’s possible with good lists and solid A/B work.
The good news: companies that regularly A/B test their emails see 37% higher open rates and 49% better click-through rates than those that don't. That's not magic; it's just compounding small wins.
What Actually Drives B2B Email Open Rates?
In B2B, prospects aren’t opening your email because it’s pretty. They’re opening because: who it’s from looks relevant, when it arrives fits their workflow, and the subject line makes it worth the click.
1. Subject line: your biggest lever
One study found that 32.72% of email recipients say the subject line is the most important factor in deciding whether to open, edging out even the sender’s name. That’s your main battlefield.
A few data points worth internalizing:
- Top‑performing subject lines average about 43.85 characters, long enough to be descriptive, short enough to scan.
- Personalized subject lines can drive 26-50% higher open rates than generic ones, depending on the dataset and context.
- Systematic subject line testing routinely yields 5-6 percentage‑point bumps in open rate in newsletter and outbound contexts.
What matters even more than the exact number? Relevance and clarity. In B2B, your best‑performing subject lines usually:
- Reflect a problem your persona actually owns (e.g., 'Missed QA deadlines last quarter?')
- Reference their role or environment ('For VPs of RevOps dealing with messy Salesforce data')
- Hint at a concrete outcome ('Cut deployment rollbacks by 30%')
Clever can work. But clear almost always works.
2. Sender name: the overlooked variable
Most teams obsess over subject lines and ignore the 'from' field, which is a mistake. One analysis of Litmus data shared in email communities indicates that around 42% of recipients look at the sender name first, versus 34% who look at the subject line.
In B2B outbound, testing the sender can be huge:
- 'Acme Corp' vs 'Jordan at Acme'
- Generic team inbox vs a named AE or SDR
- Founder/exec as sender for top‑tier accounts
It’s common to see double‑digit relative lifts in open rate when you move from a faceless brand to a real human with a recognizable title.
3. Timing: when you show up in the inbox
Timing isn’t everything, but it’s not nothing.
HubSpot found that 47.9% of B2B marketers report their best engagement between 9 a.m. and 12 p.m. Other analyses of B2B cold email show:
- Best days: Tuesday–Thursday, with Wednesday slightly on top
- Best open‑rate window: roughly 12-4 p.m., peaking around 41% opens in one large cold outreach study
Your audience may behave differently (that's why we test), but mid-week and mid-day are strong starting hypotheses for open-rate experiments.
4. Audience and offer: you can’t A/B test your way out of a bad list
No amount of subject line wizardry fixes:
- The wrong personas
- Stale or inaccurate data
- Irrelevant offers
Before you worry about squeezing 3% more open rate with clever tests, make sure your list and offer are tight:
- Clear ICP
- Verified contacts and domains
- Message that maps to a real, current pain point
Teams like SalesHive bake this into their process by pairing rigorous list building with message testing; that way, you're testing good hypotheses against the right people, not polishing spam.
How to Run Statistically Sound A/B Tests (Without a PhD)
Let’s get practical. A/B testing sounds fancy, but at its core it’s simple: send two versions, see which performs better, and make sure the difference isn’t just random luck.
Here’s a field‑tested framework you can hand to your SDR manager today.
Step 1: Pick one clear objective
For B2B outbound, your objectives might be:
- Lift open rate (subject line, sender, timing tests)
- Lift positive reply rate (messaging, offer framing)
- Lift meeting‑booked rate (CTA, friction, follow‑up cadence)
For this article, we’re focused on A/B tests where open rate is the primary metric, but you should always watch replies and meetings as secondary metrics.
Step 2: Write a simple hypothesis
Good: 'Personalizing subject lines with job title will increase opens among Director+ prospects by at least 15% without reducing positive reply rate.'
Bad: 'Let’s try something more fun and see what happens.'
A written hypothesis forces you to think about why a variant should win, which also makes it easier to interpret the results and decide what to test next.
Step 3: Test one variable at a time
Yes, it’s tempting to change three things and hope for a miracle. No, you shouldn’t.
Common open‑rate test variables:
- Subject line wording (personalized vs generic, benefit‑driven vs curiosity‑driven)
- Sender ('Brand' vs 'Rep at Brand')
- Send time (9-11 a.m. vs 1-3 p.m.)
- Preview text (reinforce value vs add urgency)
Hold everything else constant: list, day, copy, CTA. That way, when one variant wins, you know why.
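If your engagement platform doesn't manage splits for you, a deterministic assignment keeps the test clean: the same prospect always lands in the same variant, and the split stays close to 50/50. Here's a minimal Python sketch, assuming you key off the prospect's email address and a test label of your choosing (both are illustrative, not tied to any specific tool):

```python
import hashlib

def assign_variant(prospect_email: str, test_name: str, variants=("A", "B")) -> str:
    """Deterministically assign a prospect to a variant.

    Hashing email + test name means the same prospect always gets the same
    variant within a test, and different tests get independent splits.
    """
    key = f"{test_name}:{prospect_email}".lower().encode("utf-8")
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

# Tag each prospect before the sequence goes out
prospects = ["cfo@example.com", "vp.sales@example.org", "revops@example.net"]
for email in prospects:
    print(email, "->", assign_variant(email, "subject_line_personalization_q4"))
```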
Step 4: Get your sample size right
A/B tests with tiny lists produce 'winners' that fall apart as soon as you scale. Several testing guides and providers recommend at least ~1,000 recipients per variant for meaningful detection of moderate lifts, especially when baseline open rates hover around 20%.
Rough practical rules:
- Under 2,000 contacts total in a segment: testing will be noisy. Run bold tests and look for big swings, not tiny differences.
- 2,000-5,000 contacts: 50/50 split, one variable, subject line only.
- 5,000+ contacts: you can start doing 20/20/60 splits (20% see variant A, 20% see B, remaining 60% get the winner).
If you’re sending tens of thousands of cold emails per month (very common at SalesHive scale), hitting these thresholds is easy. If not, be more conservative about your conclusions.
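If you want to sanity-check whether a segment is big enough before you commit, the standard two-proportion sample-size formula gets you close. A rough sketch in pure Python, assuming a 20% baseline open rate, 95% confidence, and 80% power (the z-values are hard-coded for those settings):

```python
import math

def recipients_per_variant(baseline: float, relative_lift: float,
                           z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate recipients needed per variant to detect a given lift.

    baseline: expected control open rate (e.g. 0.20)
    relative_lift: smallest lift worth detecting (0.15 = +15% relative)
    Defaults correspond to 95% confidence (two-sided) and 80% power.
    """
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

print(recipients_per_variant(0.20, 0.25))  # ~1,100 per variant for a 25% relative lift
print(recipients_per_variant(0.20, 0.15))  # ~2,900 per variant for a 15% relative lift
```

As the second call shows, detecting a modest 15% relative lift at a 20% baseline actually takes closer to 3,000 recipients per variant; the oft-quoted ~1,000 per variant is only enough when you're hunting for fairly large lifts.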
Step 5: Run the test for a full business cycle
Don’t declare a winner after a few hours.
Because engagement patterns vary by day of week and internal schedules, run each test at least:
- 5-7 business days for general B2B outbound
- Longer if your audience is lumpy (e.g., IT teams that batch emails on certain days)
Stopping early because one subject line jumps ahead on day one is how you end up rolling out fake winners.
Step 6: Choose a winner using more than just open rate
When your minimum sample size is hit and the test has run through a full cycle, compare:
- Open rate (primary for these tests)
- Positive reply rate
- Meeting‑booked rate
- Unsubscribe and spam complaints
If variant B lifts open rate by 10% but cuts reply rate or spikes unsubscribes, treat it as a failed test, not a win.
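For the open-rate comparison itself, a quick two-proportion z-test tells you whether the gap between variants is likely real or just noise. A minimal sketch in pure Python (the counts below are made up for illustration):

```python
import math

def open_rate_ztest(opens_a: int, sent_a: int, opens_b: int, sent_b: int):
    """Two-sided z-test comparing two open rates. Returns (z, p_value)."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Variant A: 520 opens of 2,500 sends; Variant B: 610 opens of 2,500 sends
z, p = open_rate_ztest(520, 2500, 610, 2500)
print(f"z = {z:.2f}, p = {p:.4f}")  # p is well under 0.05 here, so the lift is unlikely to be noise
```

A low p-value only settles the open-rate question; you still apply the reply, meeting, and unsubscribe checks above before calling it a win.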
High-Impact Open-Rate Tests for B2B SDR Teams
Let’s talk about specific tests that consistently drive better open rates in outbound programs.
Test 1: Personalized vs generic subject lines
Hypothesis: Adding relevant personalization to subject lines will increase opens.
Examples (for a DevOps tooling company):
- A: 'Cut deployment rollbacks this quarter'
- B: 'Cut deployment rollbacks at {{Company}} this quarter'
Or for sales ops software:
- A: 'Cleaning up your CRM before Q4'
- B: '{{FirstName}}, cleaning up your CRM before Q4'
Why it works: Multiple studies show personalized subject lines can lift opens by 20-50% depending on context. In practice, even a 10-15% relative lift is huge in cold outbound.
Watch out for:
- Bad data (wrong names, weird capitalization)
- Overly cutesy personalization that undercuts credibility in serious industries
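One practical guard against bad data is rendering personalization with a fallback, so a missing or garbage field degrades to the generic subject line instead of producing 'Hi , quick question'. A minimal sketch, assuming a prospect record with a first_name field (your platform's merge-tag syntax will differ):

```python
def personalized_subject(prospect: dict) -> str:
    """Build a subject line, falling back to the generic variant if data looks bad."""
    generic = "Cleaning up your CRM before Q4"
    first_name = (prospect.get("first_name") or "").strip()

    # Fall back when the field is empty, suspiciously long, or not a plain name
    if not first_name or len(first_name) > 20 or not first_name.replace("-", "").isalpha():
        return generic
    return f"{first_name.title()}, cleaning up your CRM before Q4"

print(personalized_subject({"first_name": "DANA"}))       # Dana, cleaning up your CRM before Q4
print(personalized_subject({"first_name": "  "}))         # falls back to the generic line
print(personalized_subject({"first_name": "a.b.c inc"}))  # falls back to the generic line
```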
Test 2: Clarity vs curiosity
Marketers love clever subject lines. Buyers often just want to know what’s inside.
Hypothesis: Clear, benefit‑driven subject lines will outperform vague curiosity plays for time‑poor B2B buyers.
Examples:
- A (curiosity): 'This surprised your competitors'
- B (clarity): 'How SaaS RevOps teams cut forecast errors by 18%'
Or:
- A: 'Quick question about your roadmap'
- B: 'Cutting QA cycle time for your engineering team'
In many B2B tests (including community‑shared HubSpot examples), benefit‑driven lines have beaten curiosity‑driven ones by 20-25%. Your audience may differ, but you won’t know until you test.
Test 3: Human sender vs brand sender
Hypothesis: Prospects are more likely to open an email that looks like it came from a real person instead of a faceless brand.
Examples:
- A: From 'Acme Corp'
- B: From 'Jordan at Acme'
or for founder‑led outreach:
- A: From 'Acme Marketing'
- B: From 'Lisa, Co‑founder @ Acme'
Expected outcome: In many email case studies, switching to a named sender yields 10-20% relative lifts in open rate and better replies, especially for mid‑ and bottom‑of‑funnel lists.
Test 4: Mid-morning vs early afternoon sends
Hypothesis: For B2B decision‑makers, mid‑morning or early afternoon sends will drive higher opens than first thing Monday or end‑of‑day Friday.
Test windows (local time):
- Variant A: 9-11 a.m.
- Variant B: 1-3 p.m.
Roll this across a few weekdays. Use the same email and segment; only vary timing.
Benchmarks conflict here: nearly half of B2B marketers report their best engagement between 9 a.m. and 12 p.m., while cold email studies show a 12-4 p.m. open peak, and your personas may behave differently from both. The point is not to copy a 'best time' blog post; it's to test your way to your own best time.
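If your platform supports scheduled sends, the same deterministic-split idea applies to timing tests: assign each prospect a window, then pick a send time inside it in their local time zone. A rough sketch, with illustrative window boundaries:

```python
import hashlib
import random
from datetime import datetime, timedelta

WINDOWS = {"A": (9, 11), "B": (13, 15)}  # 9-11 a.m. vs 1-3 p.m., prospect's local time

def schedule_send(email: str, send_date: datetime) -> datetime:
    """Pick a window deterministically per prospect, then a random minute inside it."""
    variant = "A" if int(hashlib.sha256(email.encode()).hexdigest(), 16) % 2 == 0 else "B"
    start_hour, end_hour = WINDOWS[variant]
    minute_offset = random.randint(0, (end_hour - start_hour) * 60 - 1)
    return send_date.replace(hour=start_hour, minute=0, second=0) + timedelta(minutes=minute_offset)

print(schedule_send("vp.sales@example.org", datetime(2025, 6, 11)))
```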
Test 5: Preview text that adds value vs repeats the subject
A lot of tools default the preview text to the first line of your email, or worse, show 'View this email in your browser.' That’s wasted real estate.
Hypothesis: Intentional, value‑driven preview text will increase opens compared to generic or repeated subject lines.
Examples:
- Subject: 'Cut deployment rollbacks this quarter'
- Weak preview: 'Hi {{FirstName}}, hope you’re doing well...'
- Strong preview: 'Teams like {{PeerCompany}} reduced incidents 22% with one change.'
- Subject: 'Cleaning up your CRM before Q4'
- Weak preview: 'View this email in your browser.'
- Strong preview: '3 low‑lift fixes RevOps leaders use to clean Salesforce fast.'
This doesn’t require a massive test, just two variants and a few thousand sends to see which gets more opens.
Turning Open-Rate Wins into Revenue (Not Just Better Dashboards)
Optimizing for open rate alone is how you end up with click‑bait outbound that burns your domain and annoys your market. The goal is better conversations, not just better numbers.
Here’s how to keep your testing grounded in revenue.
Track the full funnel for every variant
For each A/B test, track at least:
- Open rate
- Positive reply rate (excluding OOO and hard negatives)
- Meeting‑booked rate
- Opportunities created / pipeline value (where possible)
A variant only qualifies as a real 'winner' if it:
- Increases or maintains open rate and
- Does not hurt replies, meetings, or pipeline
In many cases, you’ll find that a subject line with slightly lower open rate drives more qualified replies because it filters out low‑intent opens. That’s a trade you want.
Combine small lifts for big impact
Consider a team sending 50,000 cold emails per month:
- Baseline cold open: 25%
- Baseline positive reply: 3%
Now you:
- Run a subject line test that lifts opens by 10% relative (to 27.5%).
- Run a sender test that adds another 8% relative lift (to ~29.7%).
- Run a timing test that adds 5% relative lift (to ~31.2%).
That's roughly 3,100 more opens per month; if reply quality holds, you're talking dozens or even hundreds more conversations without increasing volume.
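If you want to sanity-check the compounding math against your own volumes, it's a three-line calculation:

```python
monthly_sends = 50_000
baseline_open_rate = 0.25
relative_lifts = [0.10, 0.08, 0.05]  # subject line, sender, and timing tests

open_rate = baseline_open_rate
for lift in relative_lifts:
    open_rate *= 1 + lift  # relative lifts compound multiplicatively

extra_opens = monthly_sends * (open_rate - baseline_open_rate)
print(f"New open rate: {open_rate:.1%}")              # ~31.2%
print(f"Extra opens per month: {extra_opens:,.0f}")   # ~3,100
```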
There’s a classic example from HubSpot where a 0.53% increase in open rate from subject line testing generated 131 additional leads in one campaign. Tiny percentages, big outcomes.
Make experimentation a habit, not a one-off
The teams that win with A/B testing treat it like part of the job, not a side project.
Practical rhythm for an SDR org:
- Monthly: Pick 1-2 key hypotheses to test on your highest‑volume sequences.
- Weekly: Review results in your pipeline or RevOps meeting for 10 minutes.
- Quarterly: Roll up learnings into updated messaging guides and onboarding.
Over a year, that’s 12-24 focused experiments, more than enough to radically evolve your subject lines and open‑rate performance.
How This Applies to Your Sales Team
Let’s map this to real sales orgs.
If you’re a small team (1-3 SDRs)
You don’t have endless volume, so keep it simple:
- Run one A/B test at a time on your main outbound sequence.
- Focus on big levers: subject line angle, sender name, and broad timing windows.
- Document every test in a shared spreadsheet: hypothesis, segment, variant A/B, results, decision.
You might only run a test every 4-6 weeks; that's fine. The discipline matters more than the velocity.
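A shared spreadsheet is plenty; if your team prefers to keep the log alongside other ops scripts, a tiny CSV-backed version works just as well. A minimal sketch (the column names are just a suggestion):

```python
import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("ab_test_log.csv")
FIELDS = ["date", "sequence", "hypothesis", "variant_a", "variant_b",
          "sample_per_variant", "open_rate_a", "open_rate_b", "decision"]

def log_test(row: dict) -> None:
    """Append one experiment to the shared log, writing the header on first use."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_test({
    "date": date.today().isoformat(),
    "sequence": "new-prospects-outbound",
    "hypothesis": "Job-title personalization lifts opens for Director+ by 15%",
    "variant_a": "Cut deployment rollbacks this quarter",
    "variant_b": "Cut deployment rollbacks at {{Company}} this quarter",
    "sample_per_variant": 2500,
    "open_rate_a": 0.21, "open_rate_b": 0.24,
    "decision": "Roll out B; monitor reply rate next cycle",
})
```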
If you’re a mid-size team (4-15 SDRs)
You’ve got enough volume to get serious:
- Assign one SDR or RevOps owner to manage the experiment backlog.
- Run 2-3 tests in parallel across different sequences (e.g., new prospects, follow‑ups, expansion).
- Use your engagement platform’s reporting (Outreach, Salesloft, Apollo, HubSpot, etc.) to segment results by persona and industry so you don’t overgeneralize.
Over time, you’ll build persona‑specific 'plays', e.g., 'for CFOs, lead with risk and compliance; for VPs of Sales, lead with quota and ramp time.'
If you’re a large or distributed team
At scale, your challenge isn’t volume, it’s consistency.
You’ll want to:
- Create a central experimentation framework (how tests are proposed, approved, run, and documented).
- Limit who can change core sequences so you don’t accidentally invalidate live tests.
- Use RevOps or marketing ops to own analysis and roll‑out of winners.
This is where partners like SalesHive can be especially useful: they already have the process, volume, and tooling in place to run dozens of tests across industries and feed those learnings back into your program.
Conclusion + Next Steps
Open rate isn't a vanity metric; it's the front door to your cold outbound. In a world where B2B email benchmarks hover around 19-21% and cold outreach sits in the high 20s, every percentage point of open-rate lift compounds into more conversations and more pipeline.
The catch: you don't get those lifts by guessing. You get them by systematically A/B testing high-impact variables (subject lines, sender, timing, and preview text) with clean hypotheses, adequate sample sizes, and an eye on replies and meetings, not just dashboard vanity.
If you take nothing else from this guide, start here:
- Benchmark your open rates by segment.
- Pick one high‑volume sequence.
- Launch a simple A/B test on the subject line or sender name.
- Run it for at least a week with a meaningful sample.
- Roll out the winner and log what you learned.
Repeat that process every month and you’ll be miles ahead of competitors still blasting the same subject line they wrote two years ago.
And if you’d rather skip the learning curve and plug into a team that’s already run thousands of these tests across 1,500+ B2B companies, SalesHive can bring that playbook, and the SDR horsepower, straight into your outbound engine.
Expert Insights
Treat Open Rate as a Directional Metric, Not the Goal
Because modern privacy features and image blocking distort open tracking, use open rate as a *relative* signal to compare subject lines or send times, not as your primary success metric. Anchor every A/B test to downstream outcomes like positive replies, meetings booked, and opportunities created so your SDR team doesn't over-optimize for vanity metrics.
Start with Subject Lines, Then Move to Sender and Timing
Subject lines influence roughly a third of open decisions, so they should be your first testing focus. Once you've found a reliable baseline, test the sender name (e.g., 'Alex at Company' vs brand-only) and send-time windows to squeeze out additional incremental lifts without rewriting copy every week.
Use Hypotheses, Not Hunches
Before you launch a test, write down a simple hypothesis like 'Personalizing by job title will increase opens among VPs by 10%+' and design the experiment to validate it. This forces your team to think about the buyer, keeps tests small and focused, and makes it easier to turn each experiment into a repeatable play for the entire outbound program.
Protect Deliverability While You Test
A subject line that spikes opens but trips spam filters is a losing trade. Monitor bounce rates, spam complaints, and inbox placement alongside open rate for every variant, and kill any treatment that degrades deliverability, even if its apparent open rate looks exciting in the short term.
Codify Wins into Your Sales Playbook
A/B tests only pay off if the learnings are captured and reused. Create a simple internal 'subject line hall of fame' and annotate each winner with context, persona, offer, segment, and channel, so new SDRs can ramp faster using proven language instead of reinventing every sequence from scratch.
Common Mistakes to Avoid
Testing five things at once in one email (subject, sender, offer, CTA, and send time).
When you change multiple variables, you can't tell which one actually moved your open rate or reply rate, so you learn nothing usable for the next campaign.
Instead: Limit each test to a single variable per audience and run it long enough to get clean data. Once you have a clear winner, lock it in and move to the next variable.
Declaring a winner after sending to a tiny sample size.
Random noise looks like a 'big win' when you're only testing on a few hundred recipients, leading you to roll out subject lines that don't actually perform.
Instead: Aim for at least ~1,000 recipients per variant whenever possible, especially for small lifts, and use consistent list segments so results are statistically meaningful.
Optimizing purely for opens instead of replies or meetings.
Click-bait subject lines can inflate opens while depressing response quality, so your pipeline doesn't actually improve even though the dashboard looks better.
Instead: Track positive reply rate, meeting rate, and opportunity creation by variant. Only call a test a 'win' if it improves or maintains downstream metrics, not just opens.
Ignoring segmentation and blasting the same test to everyone.
Executives, managers, and practitioners respond to different language and value props; lumping them together can hide winning variants for key personas.
Instead: Segment by role, industry, or stage in the funnel, then run targeted tests within each segment so you can build persona-specific subject line and timing playbooks.
Running A/B tests without documenting hypotheses or learnings.
Teams end up re-testing the same ideas, and knowledge walks out the door when SDRs leave, slowing down optimization.
Instead: Create a simple experiment log that tracks the hypothesis, variants, segment, results, and decision, and make reviewing it part of your weekly sales meeting.
Action Items
Define your B2B email open-rate benchmarks by segment.
Pull last 3-6 months of data by campaign type (cold outbound, nurture, product updates) and segment (role/industry). Use these as baselines so every test has a clear 'beat this' target instead of guessing.
Launch a subject line A/B test on your highest-volume outbound sequence.
Choose one clear variable (e.g., personalized vs non-personalized subject line), split traffic 50/50 for at least a few thousand sends, and measure opens, replies, and meetings set before rolling out the winner.
Test sender name variations on a warm or mid-funnel list.
Compare emails from 'First Last at Company' vs 'Company' alone and see which yields higher opens and replies. If a personal sender wins, standardize it across your main SDR sequences.
Run a send-time test across two or three time windows.
For 2-3 weeks, randomly assign prospects in the same segment to receive the same email at different windows (e.g., 9-11 a.m. vs 1-3 p.m.) and see which drives stronger open and reply rates.
Build a shared 'winning subject line' library for your team.
Every time a test produces a statistically meaningful win, add the subject line, segment, and performance to a central doc or CRM note so SDRs can reuse proven language instead of guessing.
Add A/B test reviews to your weekly sales or RevOps meeting.
Spend 10 minutes reviewing what was tested, what won, and what will be tested next so experimentation becomes a rhythm, not a one-off project.
Partner with SalesHive
On the email side, SalesHive’s US‑based and Philippines‑based SDR teams systematically test subject lines, sender names, and send times across massive datasets, quickly identifying what actually drives open rates and replies in your specific market. Their AI‑powered personalization engine, eMod, plugs in at the template level to dynamically tailor subject lines and first lines at scale, turning what used to be manual copy tinkering into a data‑driven optimization loop.
Because SalesHive also owns list building and prospect research, they don’t just test copy in a vacuum, they test how different messages perform by persona, industry, and trigger event, then roll the winners into your ongoing outbound strategy. The result is a fully managed SDR program where A/B testing, list quality, and consistent execution all work together to lift open rates, increase positive replies, and, ultimately, put more qualified meetings on your team’s calendar, without locking you into annual contracts.
❓ Frequently Asked Questions
Is A/B testing still useful for B2B open rates now that Apple Mail and privacy rules skew tracking?
Yes, as long as you treat open rate as a relative metric instead of an absolute truth. Privacy protections and image blocking mean open numbers are noisy, but they're still good for comparing two subject lines or send times sent to similar audiences. For B2B sales teams, use open rate to decide which variant wins the inbox, then validate that win by checking whether replies and meetings also increased.
What should B2B teams test first to improve email open rates?
Start with subject lines, because they're the biggest lever and influence roughly a third of open decisions. Once you've run several solid subject line tests, move on to sender name (human vs brand), preview text, and send-time windows like mid-morning vs early afternoon. This progression gives your SDRs fast wins while building a testing muscle that can later extend into email body copy and CTAs.
How big does my email list need to be for valid A/B testing?
For most B2B open-rate tests, aim for at least 1,000 recipients per variant if you're looking for modest improvements (like a 10-20% relative lift). Smaller lists can still be tested, but results will be noisier and you should look for large, obvious differences before rolling out a change. If your list is small, prioritize bolder experiments (e.g., radically different angles or formats) rather than tiny tweaks.
How long should I run an A/B test on a cold outbound sequence?
Most B2B teams should run each test long enough to cover a full business cycle, at least 5-7 business days, so you capture normal behavior across days of the week and avoid anomalies. For ongoing outbound sequences, you can keep the test live until both variants hit your minimum sample size, then roll out the winner and move on to the next hypothesis. Just don't stop the test early because one version looks ahead after only a day or two.
What metrics besides open rate should I track for email A/B tests?
For SDR and BDR teams, the most important metrics are positive reply rate, meeting-booked rate, and opportunities created. Open rate tells you whether you're winning attention; reply and meeting rates tell you whether that attention is turning into real sales conversations. Also keep an eye on bounce rate, unsubscribe rate, and spam complaints so you don't damage deliverability while chasing higher open numbers.
Should I personalize every B2B subject line with the prospect's first name?
Name personalization often helps, but it's not magic and can backfire if your data is messy or the message feels cheesy. In B2B, personalizing around role, company, or pain point (e.g., 'Cutting QA cycle time for VP Engineering at ACME') can be just as powerful. Treat first-name personalization as one test among many, not a default rule, and always pair it with a relevant, credible value proposition.
How many A/B tests should my SDR team run at once?
Most outbound teams are better off running one or two focused tests at a time, ideally on their highest-volume sequences, instead of sprinkling lots of tiny experiments everywhere. This keeps reporting clean, avoids confusion about which changes caused which results, and ensures your team actually implements the winners. Once you've built the habit and documentation, you can support more concurrent tests with RevOps or marketing ops support.
Can smaller B2B teams without marketing ops still run meaningful A/B tests?
Absolutely. Even a handful of SDRs can run simple tests inside tools like Outreach, Salesloft, Apollo, or HubSpot using built-in variant and sequence features. Start with one hypothesis per month, keep the test design simple (two subject lines, 50/50 split), and log results in a shared spreadsheet or Notion doc. If you don't have bandwidth to run and analyze tests in-house, a partner like SalesHive can take that experimentation off your plate.