
Posted (edited)

In 2024, I launched DEFCON 5, a deep-dive project aimed at answering a deceptively simple question:

When built for support and slotted for synergy… who brings more to the team — Defenders or Controllers?

 

The results were surprising, their implications confounding, and even a little controversial. Controllers edged out the win with better threat contribution, pet-enhanced pressure, and control-driven efficiency — but Defenders held their ground in resilience, team enablement, and raw reliability.

 

Now, it’s time to revisit that battlefield. With new builds. New support sets. And higher stakes.

 

Enter: DEFCON II: AFTERMATH

[image: DEFCON II: Aftermath badge]

 

This new series expands the original DEFCON trials with:

  • Five fresh Defender/Controller matchups

  • A spotlight on less commonly tested support sets

  • Full incarnate builds

  • And a new goal: to push both archetypes to their limits in AE 801 content, where a team wipe is the end condition

We’re no longer running radios and tip missions. These are battlefield stress tests — because if there's any gap in performance between ATs, it shows up at the margins.

 

The Support Sets Being Tested:

Each DEFCON test pairs one support set, shared between both ATs, with contrasting secondaries. Builds blend engaging theme with high-performance synergy, chosen with epic pools, incarnate powers, and team-based contribution metrics in mind.

 

 

According to Cathy, these support sets are described as follows:

  • ❄️ Cold Domination – The Icy Engine of Buffed Control

    Cold Domination is one of the most well-rounded support sets in the game — a powerful buff/debuff hybrid that excels in both team sustain and enemy suppression. It front-loads its buffs through long-duration shields and stealth auras, then pivots into wide-area debuff layering that slows enemies to a crawl and drains them of power.

  • 💖 Empathy – The Gold Standard of Ally Empowerment

    Empathy is the quintessential pure support set — focused entirely on keeping your team alive, energized, and operating at maximum potential. While it has no direct debuffs or control tools, its suite of heals, resistance buffs, and performance-boosting effects make it one of the most impactful force multipliers in the game when used skillfully.

  • ☢️ Radiation Emission – The Debilitating Core Meltdown

    Radiation Emission is a field-control and debuff powerhouse, built around area-denial toggles, ally-boosting pulses, and enemy-crippling effects that stack over time. While it offers minimal direct control, it compensates with wide-area suppression and scaling pressure mitigation.

  • 🌩️ Storm Summoning – Chaos as Crowd Control

    Storm Summoning is a high-disruption, high-risk support set that turns the battlefield into a turbulent, shifting stormfront. With minimal direct buffs and no hard control, it instead relies on repel, knockback, soft debuffs, and pseudopets to disorient and divide the enemy while cloaking allies in protective mist.

  • 🎯 Trick Arrow – The Tactical Debuff Arsenal

    Trick Arrow is a 100% debuff set built around terrain control, status disruption, and targeted weakening. With no direct heals, buffs, or hard crowd control (outside of a few strategic holds), Trick Arrow instead floods the field with stacking resistance shredders, speed killers, and soft-control effects that create windows of opportunity for teams to dominate.

 

What’s Being Measured?

Like in DEFCON 5, I’ll be tracking:

  • Survivability – Personal Defeats

  • Risk – Ally Defeats

  • Resilience – Damage Taken

  • Lethality – Foe Defeats

  • Threat – Damage Dealt

  • Efficiency – DMG out : DMG in

 

But with the addition of Sythlin's DPS Tool, which allows real-time data parsing, I'll be looking at additional metrics, including:

  • Avoidance – Hits Taken

  • Disruption – Controls

  • Provision – Heals/End

 

All these in an effort to validate whether Controllers’ scaling still holds up — or if Defenders rise to the challenge when pressure is highest against incarnate content!
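To make the Efficiency metric concrete, it is just total damage dealt divided by total damage taken over a mission. A minimal sketch (the mission totals below are hypothetical, not from any actual trial):

```python
def efficiency(dmg_out, dmg_in):
    """DMG out : DMG in expressed as a single ratio; higher means more
    damage dealt per point of damage absorbed."""
    return float("inf") if dmg_in == 0 else dmg_out / dmg_in

# Hypothetical mission totals, for illustration only
print(efficiency(250_000, 50_000))  # 5.0 -> five points dealt per point taken
```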

 

Ready to See What Happens When the Buffs Hit the Fan?

I’ll be releasing matchup breakdowns, build presentations, predictions, solo and team tests, and post-test analyses throughout the coming weeks — and your input is invaluable.

  • Which matchups do you think favor the Controller? The Defender?

  • Have you run support characters on incarnate teams lately? What gaps have you seen?

  • Do you agree with the original DEFCON 5 verdict, or are you ready to flip the script?

 

Follow this thread for updates, results, and test footage posted to YT to find out what happens AFTER THE BLAST RADIUS settles!

Edited by Dark Current
Posted (edited)

Round One

Cold Domination

 


 

“Your powers allow you to manipulate cold and ice to protect your allies and weaken your enemies.”
 


 

 

The Challengers

 

Shimr

Ice / Cold / Ice CONTROLLER


 

Concept: Cryokineticist—slows time and matter through mental focus
Playstyle: blankets the battlefield in slow-motion effects, punishing any who try to move or act too quickly
 

VS
 

Gyr Falcon

Cold / Arch / NRG DEFENDER


 

Concept: Arctic avian sharpshooter—rains frost-tipped arrows from above
Playstyle: softens enemies with Cold, then picks them off with precision shots from range and altitude

Match Up Discussion Video

 

Edited by Dark Current
Posted (edited)

Shimr – Cryokinetic Controller of the Slow Horizon


 

Shimr is a cryokineticist who doesn’t just wield cold — she manipulates time through it. Her control over molecular motion slows enemies to a crawl, dampens their reactions, and locks down the battlefield in a haze of frost and fear.

Combining Ice Control and Cold Domination, Shimr specializes in soft AoE lockdown, layered debuffs, and team-wide sustain through suppression. She adds Ice Mastery for even more zone control and ranged threat.

Build Identity:

  • Primary: Ice Control – AoE-focused immobilizes, holds, slows, and fear effects

  • Secondary: Cold Domination – Strong front-loaded shields, stealth auras, and debuff saturation

  • Epic Pool: Ice Mastery – Adds heavy cold DoTs, personal defense, and terrain denial

Tactical Strengths:

  • Field saturation with stacked slows and recharge debuffs

  • Persistent -RES, -DEF, -REGEN, and control layering via AoE patches

  • Excellent mitigation through pets, positioning, and stealth-enhanced shielding

  • High team uptime through +Recovery (Heat Loss), +Defense (Fog), and enemy softening

  • Hibernate as a panic button or tempo reset in tough fights

Shimr doesn’t aim to burn through enemies — she intends to outlast them, exhaust them, and immobilize them in place. Her playstyle rewards patience, battlefield awareness, and surgical deployment of slows, storms, and shields.

Watch for how Shimr uses zone control to break aggro patterns, split spawns, and enable DPS to safely shred slowed enemies.

Mids Build: Shimr - Controller (Ice Control).mbd

 

 

Build Discussion and Solo Strategy Video:

 

 

 

AE 801 Incarnate Team Trials Video!

 

 

Edited by Dark Current
Posted

Gyr Falcon – Arctic Sentinel of the Stratosphere


 

A high-altitude scout from the icy edge of the world, Gyr Falcon fights from above — raining pinpoint strikes down from his aerial perch while shielding his team from harm. A master of Cold Domination, he opens every battle with powerful, long-duration protections and follows up with relentless, precision fire from his Archery suite.

With Energy Mastery reinforcing his staying power and burst, Gyr Falcon is built for sharp, repeatable alpha strikes and team durability — a Defender who sets the team up, then keeps the pressure on.

Build Identity:

  • Primary: Cold Domination – Powerful defense/resist shields, stealth auras, terrain debuffs, and regen suppression

  • Secondary: Archery – Long-range, fast-recharging attacks with wide cones and bonus accuracy

  • Epic Pool: Energy Mastery – Self-sustain, resistance toggle, and high-burst melee finisher

Gyr Falcon’s Tactical Strengths:

  • Pre-battle shielding lets the team open safely and strike confidently

  • Precision strikes from flight — leveraging Archery’s range and speed without needing to reposition

  • Power Build Up + Total Focus allows for devastating crit-burst moments when needed

  • Free to focus on offense once shields and auras are deployed — little need for mid-fight upkeep

  • Temp Invulnerability and Force of Nature provide emergency toughness in high-stakes pulls

Gyr Falcon is front-loaded, fast, and focused — designed to make his team better right out of the gate, then blast safely from above. He’s not built to micromanage the field — he lets Cold do the lifting, and Archery do the cleaning.

Watch for how Gyr Falcon leverages early shielding to stay mobile and maximize DPS while staying out of harm’s way.

 

Mids Build: Gyr Falcon.mbd

 

Build Discussion and Solo Strategy Video:

 

 

 

 

AE 801 Incarnate Team Trials Video!

 

COMING SOON (TM)!

Posted

Obviously the player behind it is going to drive the performance more than the AT / powerset combo, but, all of that being equal, I would pick a Defender over a Controller as a teammate 10 times out of 10.

Posted

I have my own Ice/Cold/Ice Controller (Jacke Canada) and I love the combination.  Still working on what Cold Domination Defender to build.  It won't be Archery, because on Defenders and Corruptors I only pair Archery with Trick Arrow (just the way I am).

 

And I'm a long time member of Repeat Offenders.  Any Toon can add to any Team outside of the bleeding edge of difficult content.  It is more how well the Toon is built and played.

 

 



Posted
4 minutes ago, Jacke said:

I have my own Ice/Cold/Ice Controller (Jacke Canada) and I love the combination.  Still working on what Cold Domination Defender to build.  It won't be Archery, because on Defenders and Corruptors I only pair Archery with Trick Arrow (just the way I am).

 

And I'm a long time member of Repeat Offenders.  Any Toon can add to any Team outside of the bleeding edge of difficult content.  It is more how well the Toon is built and played.

 

 

 

I love the Repeat Offenders concept. I ran Defenders of the Night SG back on live and we did all-defender stuff all the time until CoV came out.

Posted
1 hour ago, arcane said:

Obviously the player behind it is going to drive the performance more than the AT / powerset combo, but, all of that being equal, I would pick a Defender over a Controller as a teammate 10 times out of 10.

 

Well, I aim to find out if it's the player or the powerset. Last go-round, my Controllers edged my Defenders thanks to their perma pets, from what the data indicated. But that was vs 54x8 standard content. What about vs. incarnate-level content? Do those higher Defender buff numbers matter or not?

Posted (edited)
1 hour ago, Dark Current said:

Do those defender higher buff numbers matter or not?

 

Depending on the powerset and team composition, sure.

 

Sonic Resonance will pull much more weight with bigger base scalars, but Kinetics is going to be capping the team's damage regardless of whether they're a Defender or a Controller or a Mastermind.

 

However the fact that Defenders (and Corruptors) get a damage orientated blast set with the potential for highly procable blasts and AoEs matters more. Min-maxed they are runaway winners outside of specific edge cases such as a Procbombed Arsenal Control in an AE farm or a Perma PA Illusion Controller vs a pylon.

Also Controller damage has taken a nosedive recently since Plant Control got smacked with the nerfbat and the introduction of variable recharge AoE controls (it's great for control, but rubbish for proc activation rates).

 

My opinion on the whole premise is still that Defenders are usually more valuable to an optimized team and Controllers more valuable to an unoptimized one; and that using different Offensive Powersets each time will just skew any attempt at comparison to the point where you might as well be comparing Apples to Cauliflowers. But hey, it's a game. Just as long as you're enjoying the ride... 🎠

 

Edited by Maelwys
Posted (edited)
1 hour ago, Maelwys said:

 

Depending on the powerset and team composition, sure.

...

My opinion on the whole premise is still that Defenders are usually more valuable to an optimized team and Controllers more valuable to an unoptimized one; and that using different Offensive Powersets each time will just skew any attempt at comparison to the point where you might as well be comparing Apples to Cauliflowers. But hey, it's a game. Just as long as you're enjoying the ride... 🎠

 

 

I’m absolutely enjoying the ride. That said, I want to clarify why I’m intentionally using random offensive powersets and team comps in these Defender vs Controller tests, rather than keeping them fixed.

 

This isn’t the apples-to-cauliflowers comparison you dismiss it as — it’s a Monte Carlo approach, a real-world method used in science, finance, and engineering to measure how things perform under uncertainty. In short: you run the same type of test many times with randomized inputs and see whether consistent patterns still emerge.

 

Why? Because if a support set or archetype performs well no matter what kind of team or situation it's in, then we’ve uncovered a generalizable strength, not just a combo that works in one ideal setup. That’s what I’m after.

 

If I used the same blast set or teammates every time as you're suggesting, I'd risk:

  1. Building in a bias toward a specific synergy,

  2. Missing the bigger picture of which support sets hold up across a variety of actual play conditions.

So, the randomness of the tests isn't a flaw — it's the engine of the method. While it makes the results messier, it also makes them more meaningful, because patterns that emerge from the noise are the ones worth trusting.
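To sketch what that engine looks like in code, here is a toy Monte Carlo run in Python. Every number is invented for illustration (a hypothetical 10% underlying DPM gap buried under heavy per-mission noise from random teams and maps); the point is only that averaging many randomized trials can recover an underlying pattern that no single trial shows:

```python
import random

def run_trials(true_mean, noise, n, seed):
    """Simulate n missions where measured DPM = underlying performance
    plus random team/map noise. Values are hypothetical, not game data."""
    rng = random.Random(seed)
    return [true_mean + rng.gauss(0, noise) for _ in range(n)]

# Two hypothetical archetypes with a real gap (1100 vs 1000 DPM) and a
# per-mission spread of 300 DPM from randomized conditions.
at_a = run_trials(1100, 300, 25, seed=1)
at_b = run_trials(1000, 300, 25, seed=2)

# Any single mission can go either way; the 25-mission averages are what
# tend to recover the underlying gap.
print(sum(at_a) / 25, sum(at_b) / 25)
```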

 

I appreciate your thoughts and the chance to explain the reasoning. I'm happy to debate Defenders vs Controllers on a per-case basis if you'd like, but I’m testing for robustness, not cherry-picked synergy that would come with locking into a specific blast or control set for every combo.

Edited by Dark Current
Posted
21 minutes ago, Dark Current said:

This isn’t apples to cauliflower as you dismiss it as — it’s a Monte Carlo approach, which is a real-world method used in science, finance, and engineering to figure out how things perform under uncertainty. To summarize it, you run the same type of test many times with randomized inputs to see if consistent patterns still emerge.

 

I agree that the concept behind that approach (performing a very large number of tests, with whatever variables you don't care about effectively being "randomised" in an attempt to average any disparity out) is indeed potentially sound.

 

But it only holds up if you can perform a sufficiently high number of tests that the results start to stabilize. The more tests the better, obviously, but I think it's fair to say that picking just 5 possible powerset combinations out of a possible 204 (Controller) and 255 (Defender) is hardly exhaustive. And whilst testing a larger number of those possible combinations might begin to reduce the margin of error to more acceptable levels... that doesn't factor in all the possible Epic/Patron/Pool power combinations, let alone Incarnate ability selection, individual power picks or enhancement slotting choices. The number of potential variables in play is simply too large for this to be a feasible testing methodology.
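For what it's worth, that stabilization concern can be put in rough numbers: under standard assumptions the noise in an n-sample average shrinks only with the square root of n, so quadrupling the number of tests merely halves the expected wobble. A quick sketch (the 300 DPM per-trial spread is an invented figure):

```python
import math

def standard_error(sigma, n):
    """Expected noise remaining in an n-trial average, given a
    per-trial spread of sigma."""
    return sigma / math.sqrt(n)

# With a hypothetical per-mission spread of 300 DPM:
print(standard_error(300, 25))   # 60.0 -> a 25-trial average still wobbles by ~60 DPM
print(standard_error(300, 100))  # 30.0 -> 4x the trials only halves that
```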

 

If instead the variables were kept as static as possible (e.g. working out what the most average/median offensive Defender and Controller powersets are, then using only those sets in each of the tests) then that might allow any performance disparity between the two ATs to be highlighted with a much smaller sample size. But it likely won't be as entertaining to play; and would still result in arguments like "but Dark Blast unfairly favours Defender -ToHit scalars because you end up with more survivability wiggle room which just lets you procbomb everything".

Posted (edited)

@Maelwys I appreciate your detailed response, and I think we're actually close in our thinking. Since this project leans on a Monte Carlo-inspired testing style, I wanted to explain a key tool I'm using to evaluate it: Cumulative Average Analysis.

 

Cumulative Average tracks how the average value of a performance metric (like damage per minute) changes as each new mission's data is added. It works like this:

  • Trial 1 = just the first result

  • Trial 2 = average of Trials 1 and 2

  • Trial 3 = average of Trials 1, 2, and 3

  • ...and so on, until Trial 25 = the average of all 25 trials

This creates a picture of performance over time where you can see whether the trend is stabilizing (the main goal in Monte Carlo).
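The trial-by-trial recipe above takes only a few lines to compute. A minimal Python sketch (the DPM values are made up for illustration, not actual test data):

```python
def cumulative_average(results):
    """Running mean after each trial: entry i is the average of results[0..i]."""
    averages = []
    total = 0.0
    for count, value in enumerate(results, start=1):
        total += value
        averages.append(total / count)
    return averages

# Hypothetical per-mission DPM values
print(cumulative_average([900, 1100, 1000, 1050, 950]))
# [900.0, 1000.0, 1000.0, 1012.5, 1000.0]
```

Plotting that list against trial number gives the convergence curve: if the line flattens and holds, the metric is stabilizing.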

 

Monte Carlo methods need random inputs to stress-test the system. You're not looking to eliminate variability, but to see if a pattern emerges despite it. It's why I’m deliberately introducing variation in teammates, powersets, maps, etc. If an AT performs well consistently through all the noise, that’s a real signal, not a fluke.

 

A cumulative average graph shows whether performance is converging. If a line rises, flattens and holds, that's telling you the performance metric is stable and meaningful across the variables — exactly what I'm looking for to test generalizable performance.

 

DEFCON 5 Sample Results

Here’s what I found when applying this to the DPM data from the 25 Defender missions and corresponding 26 Controller missions:

 

[Graph: cumulative average DPM, Defenders vs Controllers]

  • The Controller line rises faster and flattens at a higher average DPM.

  • The Defender line is more erratic and flattens lower.

This suggests that, across randomized team setups and builds, these trends held as more data accumulated:

  • Controllers outperform Defenders in average damage per minute.

  • The Controller curve rises faster and flattens higher, indicating stronger and more consistent offensive performance.

  • The Defender line is more erratic, starting lower and stabilizing at a lower average, which suggests greater variability in support synergy or damage contribution.

 

Now, you're absolutely right that I’m not sampling anywhere near the full powerset × slotting × epic × incarnate potential. But the point of these tests isn’t to simulate every combination. It’s to observe whether significant trends emerge from real, in-game randomness.

 

If the 'signal' I measure is strong enough to stabilize over 25 varied conditions, it has value. If it collapses as soon as variables change, it wasn’t stable to begin with.

 

I get the appeal of narrowing things to "median builds," and that kind of reductionist testing has its place — but it would overcontrol the environment IMO, hiding synergy or volatility that emerge in actual game environments.

 

So no, this isn’t meant to be a perfectly controlled lab experiment. It’s more about stress-testing the two classic support ATs in the field and letting large, messy data reveal patterns. And what the cumulative average graph above tells us is: in this pocket of noise, Controllers outperformed Defenders, and reliably so, in DPM. The same holds for other metrics, which is why I gave them the 'win' in DEFCON 5.

 

The next questions are:
Why? I think it had to do with 'perma' pets.

Is it true on the margin? I suspect defender advantage from their higher buff / debuff numbers wasn't properly tested at the normal game setting of 54x8.

Edited by Dark Current
Posted (edited)

  

12 hours ago, Dark Current said:

@Maelwys I appreciate your detailed response, and I think we're close in thinking actually. Since this project leans on a Monte Carlo-inspired testing style, I wanted to explain a key tool I'm using to evaluate it called Cumulative Average Analysis.

...

I’m deliberately introducing variation in teammates, powersets, maps, etc. If an AT performs well consistently through all the noise, that’s a real signal, not a fluke.

...

Here’s what I found when applying this to the DPM data from the 25 Defender missions and corresponding 26 Controller missions:


I get the testing methodology, and I agree that measuring each build across ~30 data points (any fewer and the statistical confidence plummets) will provide some useful points for comparison.


My issue is more one of... how best to put this? "False advertising"? "Sensationalist overstatement"? "Clickbait headlines"? "Unrealistic expectations"?

I'm not sure where this and the original Defcon thread fall in/amongst all those terms - because some of them imply an intention to misdirect for the sake of views; and I'm not sure that's what's going on here and I definitely don't want to disparage or belittle the obvious effort that went into it.

However I am more than a little bit concerned that someone might glance at these thread titles, then immediately look at the results and draw sweeping conclusions that are beyond the scope of what was tested. I have seen plenty of cases (on these forums, in game, on Discord, on Reddit, etc.) where someone has spouted misinformation based on test results taken completely out of context. Ston's old Melee Comparison and Tier Listing is a good example: if you don't look too closely at the context (e.g. the attack chains and slotting utilised, and what was actually being attacked) you might be forgiven for thinking it is a straightforward test of how much damage the powers within each offensive melee powerset can deal, with each powerset then ranked against the others. But instead it's a test of specific builds and attack chains, many of which rely heavily on pool and epic powers. That doesn't mean it's not useful data; but it's often misused as ammunition in arguments about how much set X outperforms set Y in a vacuum, typically to help the quoter justify powerset buffs or nerfs. So allowing your audience to easily understand the scope of what is being tested is important.

The original DEFCON thread claims to be answering the question "who brings more to the team — Defenders or Controllers?" by making "an honest-to-goodness comparison of these two ATs". In reality, what it is actually measuring and recording are multiple data points for a very limited number of specific builds.
Therefore the most this approach can show, with a reasonable level of statistical confidence, is how THOSE SPECIFIC CHOSEN BUILDS are likely to perform in a team. Whilst you can certainly compare those builds with each other and draw conclusions from that, the number and variety of builds being tested is far too limited to be meaningfully representative of "Defenders" and "Controllers" as a whole - there are simply so many possible build variations that you cannot directly extrapolate from such a limited subset to produce a meaningful outcome; at least not whilst maintaining a reasonable level of accuracy and statistical confidence.

So whilst these threads are certainly entertaining (I enjoy the artwork and bios in particular) unfortunately as far as I can tell it's falling well short of its stated goals - because the testing methodology being employed is far too limited to measure "Defenders vs Controllers" with a reasonable level of statistical confidence.


 

12 hours ago, Dark Current said:

Now, you're absolutely right that I’m not sampling anywhere near the full powerset × slotting × epic × incarnate potential. But the point of these tests isn’t to simulate every combination. It’s to observe whether significant trends emerge from real, in-game randomness.

If the 'signal' I measure is strong enough to stabilize over 25 varied conditions, it has value. If it collapses as soon as variables change, it wasn’t stable to begin with.

...

So no, this isn’t meant to be a perfectly controlled lab experiment. It’s more stress testing the 2 classic support ATs in the field, and letting large, messy data reveal patterns. And what the cumulative average graph above tells us is: in this pocket of noise, Controllers outperformed Defenders, and reliably so, in DPM. This is also true for other metrics, which is why I gave them the 'win' in DEFCON 5.


Again, I agree with the first section here - with 25 data points you are indeed very likely going to start to see meaningful trends emerge.

But those trends are only meaningful for each character being tested.

There seems to be a very big assumption going on here: that the results for these 10 characters can be extrapolated to provide an accurate indication of how the Defender and Controller ATs will perform in relation to each other, rather than merely an accurate indication of how this particular subset of characters will perform in relation to each other. And that's what I'm taking issue with here - you've tested 10 of the possible 459 primary/secondary powerset combinations (let alone the potential variation in power selection, enhancement slotting, power pools, epic pools, incarnate choices, etc.), which at best covers 2.18% of the possible builds. Therefore I do not believe this experiment allows you to state with any level of confidence that "Controllers outperformed Defenders, and reliably so"... just that "these Controllers outperformed these Defenders, and reliably so".

And let's be clear: I'm not demanding in the slightest that you test all 204 (Controller) and 255 (Defender) powerset combos here. Because (i) that's sheer madness and (ii) doubtless even after that someone else would object because (for example) "your Time Manipulation Controller should have been using both Power Boost and Radial Clarion to boost the effectiveness of Far Sight like a real character would have done..." 🙄.
The sheer enormity of build customisations available in CoX simply doesn't lend itself to trying to model things based on random sampling; at least not without unfeasibly large sample sizes; and different people have very different notions about building characters and pushing min-maxed numbers. One person might go deep into DPS; and another into maximum mitigation; and another might try for both whilst making minimal build concessions - so one person's Controller (or even Mastermind) could easily beat another person's Defender in pure buffing potential. Squeezing maximum performance out of each of my characters is something I personally rather enjoy making a game out of; but lots of other people simply don't care in the slightest - so there are myriad unknown and/or uncontrollable variables that can muddy the waters.


However it's still an entertaining thread with lots of good and useful data, and the results seem perfectly valid for what is actually being tested. So thumbs up 👍
 

Edited by Maelwys
Posted (edited)
12 hours ago, Dark Current said:

The next questions are:
Why? I think it had to do with 'perma' pets.

Is it true on the margin? I suspect defender advantage from their higher buff / debuff numbers wasn't properly tested at the normal game setting of 54x8.


Defender level buffs when applied to pets are pretty obviously going to beat Controller level ones.
And being able to layer more -resistance and -tohit on enemies is only going to benefit pets.

But Defenders only really get access to Patron Pool Pets (with the exception of stuff like Traps FFG, Marine's Barrier Reef and Fluffy from Dark Miasma IIRC). Controllers get more pets. So whenever there aren't any teammates around they have something to gain the benefit of all their allied buffs and perhaps even tank for them. 

Personal experience with running lots of tests with Masterminds and Crabbermind VEATs has shown me that pets are incredibly efficient whenever it comes to taking down a big single target; especially with lots of buffs and debuffs in play. That's a big part of why Crabberminds were top of the Pylon leaderboards for ages and why /Marine MMs can down a pylon in a mere 10 seconds. However whenever you subject those pets to a real-world mission environment with multiple targets and teammates they can "underperform". And it's not just because you need to expend more effort in keeping them alive - they're slow; they have buggy AI; they tend to get stuck a lot on Geometry; they obstruct teammate vision and movement; etc. etc. And that's on a MM that can issue their pets orders - something which until very recently Controllers were unable to do!

I will say however that whilst my oldest, most support-focused Defender (a Sonic/Elec that I've had since issue ~7) did really appreciate gaining access to patron pets back in the day, I've long since stripped them out of their build in favour of min-maxing the toon's own attack chain. And from the Controller side... whilst my two oldest, most support-focused ones (an Illusion/Empath, which on HC has since been remade as an Illusion/Time; and an Earth/Thermal) are/were both heavily reliant on their T9 pets, neither of them has ever taken a Patron Pet despite being perfectly capable of getting them "perma". So it depends on the pet. IMO the patron ones tend to be a bit underwhelming... in fact the only Controller I have which does take a Patron Pet is an Arsenal/Traps, and that was more because I had plenty of power picks left and already wanted Poisonous Ray.
 

Edited by Maelwys
Posted

@Maelwys

 

Thanks again for the thoughtful response — I genuinely appreciate your concerns around how testing data is presented and interpreted. You’re absolutely right that scope clarity is important, and I want to acknowledge that directly.

 

So to clarify: these forum posts are primarily a data notebook where I post the raw mission outcomes, charts, and performance breakdowns so that anyone who wants to deep-dive into the numbers can do so without pausing and rewatching the YouTube videos. The nuance, limitations, and intentions of the DEFCON testing are addressed in the accompanying YT build, test, and analysis vids. You'll typically hear me say things like: "This isn't all Defenders vs all Controllers" or "I'm not concluding anything here" or "these results are for these specific builds across randomized conditions."

 

I do use thread titles like "Defenders vs Controllers", but not to overstate the scope: it's shorthand for the test theme or question. The actual conclusions are more modest: "My tested Controllers showed stronger and more stable DPM across 25 missions than the Defenders."

 

I agree that misinterpreting limited data as global truth is a problem, and it's why I'm structuring my presentations this way:

  • YouTube = narrative, nuance, limitations, goals.

  • Forums = data reference, clean reporting, minimal interruptions.

As far as sampling and representation go - like you said - testing all primary/secondary combos is sheer madness. But Monte Carlo-style sampling isn't trying to exhaust the list. It asks a simpler question: "Do any consistent patterns emerge when we inject randomness into team context?"
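That "inject randomness, look for stable patterns" approach can be sketched roughly like this. This is a toy illustration only: the DPM baselines, the buff factor, and the `run_trial` stand-in are hypothetical placeholders, not the actual DEFCON mission data (each real "trial" is a full logged AE mission run).

```python
import random

def sample_contexts(n, seed=0):
    """Draw n randomized team contexts (here reduced to a single
    hypothetical buff multiplier); seeded so runs are reproducible."""
    rng = random.Random(seed)
    return [{"buff_factor": rng.uniform(0.8, 1.2)} for _ in range(n)]

def run_trial(base_dpm, ctx, rng):
    """Stand-in for one mission outcome: a build's base DPM scaled by
    the team context, plus per-mission noise."""
    return base_dpm * ctx["buff_factor"] * rng.uniform(0.85, 1.15)

rng = random.Random(42)
contexts = sample_contexts(25)                        # same 25 contexts
ctrl = [run_trial(1050, c, rng) for c in contexts]    # for both builds,
deff = [run_trial(1000, c, rng) for c in contexts]    # so only the build
                                                      # itself varies
print(f"mean DPM: ctrl={sum(ctrl)/25:.0f} def={sum(deff)/25:.0f}")
```

The key design point is that both builds are run against the same sampled contexts, so any persistent gap in the means is attributable to the builds rather than to lucky draws of team composition.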

 

So far, the answer my test data has provided is YES. For these builds, some clear trends did emerge. But I completely agree with your bottom line:

These Controllers outperformed these Defenders in these trials. Not: “Controllers are definitively better than Defenders.”

 

And that's why I'm running the DEFCON II: Aftermath series. It will add an additional 5 defenders and 5 controllers, another 25 missions each, to the mix. It will also increase the challenge level from standard 54x8 to AE 801 Incarnate challenges to really examine support capabilities at the margins.

 

I have no idea what's going to happen, but in my initial run with Shimr, my Ice/Cold/Ice Controller, her pet output was down a good 15% from what I was seeing in the DEFCON 5 tests. IF that's real and holds true across the character tests, it will hurt Controllers more than Defenders. And IF that happens, the advantage the DEFCON Controllers had in the support game will evaporate. That would help isolate the Defenders' superior buff/debuff numbers, which COULD give them the advantage against incarnate-level foes. But we'll see.

 

It's early in the game and I have yet to release the actual Shimr data in the context of the bigger picture. There is a lot more there now with Sythlin's DPS tool, so I'm trying to make sense of it before presenting more. But you'll see it later today or tomorrow, and that will give you both a baseline for the DEFCON II: Aftermath series and a view of how it sits in the greater context of the DEFCON 5 series data. As each test result comes in, hypotheses will evolve, but I'll hold off on further conclusions until all 10 tests are completed and added to the original 10.

 

I really appreciate the discussion. Your critique helps sharpen how I communicate this work. Thanks again.

Posted

A core reason why Pets often underperform is that it's far too easy for a build to leave Pets--and Pseudopets--nowhere near that necessary final ToHit of 95%.

 

The reason for this is most Pet IO sets don't give the Pets enough Accuracy.

 

Mids Reborn (MRB) also has a massive bug (so big and fundamental it's unfixable): the vast majority of Pets and Pseudopets don't benefit from the Casting Toon's Global Buffs, yet MRB shows them as getting those buffs. Some Pets (like Trip Mine) don't even benefit from Buff Auras like the Leadership Powers.

 

That means Pets don't benefit from these things on the Casting Toon:

  • Level Shifts (except from the Mastermind Inherent Power Supremacy)
    • +4 Content is +4 to the Pets, while with a T3 or T4 Alpha Boost it's +3 to the Toon
  • Global Accuracy
  • Global ToHit
  • Global anything

Only a small number of Pets that are an extension of the Caster (Electrical Blast's Voltaic Sentinel, for example) benefit from the Caster's Global Buffs.

 

To avoid constantly recalculating a Pet or Pseudopet's actual final ToHit, I use these two Rules in builds:

  • Slot ED-capped (~93% or greater) Accuracy.
  • If critical Pet Powers aren't Autohit or have a Base Accuracy under 1.2, they also need:
    • The Caster to have Tactics (even Mastermind Tactics with just the Base Slot is good enough).
    • A good source of -Def on the targets the Pets are attacking.
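As a rough illustration of why those Rules matter, here's a sketch of the commonly cited hit-chance formula, which is clamped twice to the 5%..95% floor/ceiling. The specific numbers below (base ToHit vs a +4 target, Tactics and -Def magnitudes) are illustrative assumptions, not measured in-game figures:

```python
def hit_chance(base_tohit, tohit_buffs, target_def, accuracy_mult):
    """Sketch of the published hit-chance formula:
    HitChance = clamp( Accuracy * clamp(BaseToHit + ToHitBuffs - Defense) )
    where both clamps are to the 0.05..0.95 floor/ceiling."""
    clamp = lambda x: max(0.05, min(0.95, x))
    return clamp(accuracy_mult * clamp(base_tohit + tohit_buffs - target_def))

# A Pet facing a +4 target (base ToHit ~39%) with ED-capped ~93% enhancement
# Accuracy but no ToHit support still falls well short of the 95% ceiling:
print(round(hit_chance(0.39, 0.00, 0.00, 1.93), 3))   # 0.753

# Layer on ~10% ToHit (Tactics) and ~15% -Def on the target (passed here as
# negative Defense), and the same Pet reaches the ceiling:
print(round(hit_chance(0.39, 0.10, -0.15, 1.93), 3))  # 0.95
```

This is why Accuracy slotting alone can't rescue a low base ToHit against high-level or high-Defense targets: the multiplier acts on a small number, while Tactics and -Def raise the number being multiplied.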

 

 

