I couldn't resist doing some quantification around this.
The setup is for my FM/SD numbers. I ran a Monte Carlo simulation of kill times, with critical hits modeled as a random function and the critical-strikes proc modeled as a random function. I did not model Build Up; I just scaled my damage up appropriately. Modeling the Gaussian's proc chance properly would increase the variability somewhat, since I don't think it's a 100% chance on activation.
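The simulation loop itself is simple. Here is a minimal sketch of the approach; every number in it (pylon HP, attack damage, hit/crit chances, cycle time) is a made-up placeholder for illustration, not my actual FM/SD build values:

```python
import random
import statistics

# Placeholder constants -- illustrative only, not real build numbers.
PYLON_HP = 60_000        # assumed effective HP (regen folded in)
HIT_CHANCE = 0.95        # capped to-hit; 5% is the irreducible miss chance
CRIT_CHANCE = 0.10       # assumed crit rate against a hard target
DAMAGE_PER_ATTACK = 600  # average damage of one attack in the chain
CYCLE_TIME = 1.5         # seconds per attack

def kill_time(rng: random.Random) -> float:
    """One trial: attack until the pylon's HP is exhausted."""
    hp, t = PYLON_HP, 0.0
    while hp > 0:
        t += CYCLE_TIME
        if rng.random() < HIT_CHANCE:          # to-hit roll
            dmg = DAMAGE_PER_ATTACK
            if rng.random() < CRIT_CHANCE:     # separate crit roll (two-roll)
                dmg *= 2
            hp -= dmg
    return t

rng = random.Random(1)
times = [kill_time(rng) for _ in range(10_000)]
print(f"mean {statistics.mean(times):.1f}s, stdev {statistics.stdev(times):.1f}s")
```

Repeat 10,000 times and you have the kill-time distribution; everything below comes from sorting that list into percentiles.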
Note that Fiery Melee is about as predictable as you can get for a scrapper set. If you are using a set like Titan Weapons, or anything with a combo mechanic, the irreducible 5% chance to miss will increase variance further, because a miss can also cost you your combo/momentum/whatever builder. I don't have that problem with FM, and honestly, I didn't feel like modeling it.
Anyway, to the graph.
The y-axis is the cumulative percentile of the distribution; the x-axis is the time in seconds to kill the pylon. The 80% confidence interval ranged from 160 seconds to 190 seconds, which translates to roughly 330-370 DPS. Note that the conversion from kill time to DPS is not linear: once you start dropping under 150 seconds the curve gets very steep (a larger change in the DPS number for a smaller change in seconds to kill).
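The steepness falls out of the shape of the conversion. As a rough sketch, model DPS(t) = E / t, where E is an assumed effective pylon HP (regen folded in); E = 61,250 below is chosen only so that DPS(175s) = 350, roughly in line with the numbers above, and is not the actual pylon formula:

```python
# Assumed effective HP, calibrated so DPS(175) = 350 -- illustrative only.
E = 61_250.0

def dps(t: float) -> float:
    """Convert a kill time in seconds to DPS under the 1/t model."""
    return E / t

# The same 10-second improvement is worth far more DPS at low kill times:
print(dps(140) - dps(150))   # 150s -> 140s: ~29 DPS gained
print(dps(180) - dps(190))   # 190s -> 180s: ~18 DPS gained
```

So a 10-second swing near 190 seconds moves your DPS number about half as much as the same swing near 150 seconds, which is why small timing luck inflates apparent DPS differences at the fast end.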
The blue curve is the theoretical distribution of pylon kill times in seconds from 10,000 trials of the MC simulation. The orange line is the mean of that distribution.
For fun I added the range of means you can expect from a 5-trial sample (grey curve). It improves your results relative to a 1-trial sample (blue curve), but still has a decently wide distribution.
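The narrowing of the grey curve is just the standard-error effect: averaging k trials shrinks the spread by roughly sqrt(k). A quick sketch, using stand-in normal kill times (mean 175s, stdev 12s, eyeballed from the 80% interval above) rather than the real MC output:

```python
import random
import statistics

rng = random.Random(7)

# 10,000 single-trial results vs 10,000 five-trial averages.
single = [rng.gauss(175, 12) for _ in range(10_000)]
five_trial_means = [
    statistics.mean(rng.gauss(175, 12) for _ in range(5))
    for _ in range(10_000)
]

print(statistics.stdev(single))            # ~12
print(statistics.stdev(five_trial_means))  # ~12 / sqrt(5), i.e. ~5.4
```

Five trials cuts the spread by a bit more than half, which is better, but a 5-trial mean still wanders over a band of ten-plus seconds.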
I won't weigh in much more on this, other than to say that if I bothered to go to this effort, my position on drawing conclusions from a single number is probably clear. I realize the irony that I have posted a single number in this very thread.
One edit: I assumed a two-roll system for hit and crit, like XCOM 1, rather than a single-roll system like XCOM 2. A single-roll system would reduce the variance.
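For anyone who wants to check the sensitivity to that assumption, here is a sketch of the two conventions with the same placeholder numbers as before (not real build values). In the two-roll version the to-hit and crit rolls are independent; in the single-roll version one roll decides everything, with a crit band nested at the bottom of the hit band, so crits and hits are perfectly correlated. Note the overall crit rate also differs slightly between the two layouts (p_hit x p_crit vs p_crit), so this is a rough comparison rather than a controlled one:

```python
import random
import statistics

HIT, CRIT, BASE = 0.95, 0.10, 100.0  # placeholder chances and base damage

def two_roll(rng: random.Random) -> float:
    """XCOM-1-style: independent to-hit and crit rolls."""
    if rng.random() >= HIT:
        return 0.0
    return BASE * 2 if rng.random() < CRIT else BASE

def single_roll(rng: random.Random) -> float:
    """XCOM-2-style: one roll; crit band nested inside the hit band."""
    r = rng.random()
    if r < CRIT:      # crit band at the bottom of the hit band
        return BASE * 2
    if r < HIT:       # remainder of the hit band: normal hit
        return BASE
    return 0.0        # miss

rng = random.Random(42)
a = [two_roll(rng) for _ in range(100_000)]
b = [single_roll(rng) for _ in range(100_000)]
print(statistics.mean(a), statistics.pvariance(a))
print(statistics.mean(b), statistics.pvariance(b))
```

How much the per-attack variance actually moves depends on the exact band layout and on whether the quoted crit chance is conditional on hitting, which is why I'm flagging it as an assumption rather than re-running everything.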