You are now pulling a bait and switch.
When I presented aggregate MTGO daily data, you dismissed it as a mere "sample," explicitly calling it a questionably reliable data source.
I've pulled no bait and switch. I've consistently said that the data set is a sample. The statistics from which, as I read it, Danny drew his conclusion are the same. I read him as extrapolating from the data set to describe the metagame as a whole. When you do that, I find it reasonable to factor variance into the data set, which over a quarter of the year can amount to a significant number of decks.
If Danny and I are both using the exact same data source: MTGO daily events, why are you crediting his data, but ignoring mine?
Moreover, if you are so concerned about sampling and time periods, shouldn't you be much more critical of his data set? If this isn't about just Danny's data, then doesn't his claim become even more tenuous?
Again, according to the data I presented in the first post in this thread, in Q1:
Shops are 30% of dailies
Gush is 20.7%
And Mentor is 10%
Danny's explanation for why his data set is different, with Shops at 22% and Mentor at 16%, is that he added October, November, and December to the data. That means Danny had the exact same data I have, but added three more months at the beginning (months that are therefore less directly relevant to determining current trends).
Adding those three months bolsters any claim that Mentor is getting closer to Shops, but looking just at Q1 makes it clear that Shops have pulled far ahead. According to the data, Shops went from 14% of dailies in Q4 to 30% in Q1. Danny knows this, or can know it, since his data set encompassed both quarters. By presenting Q4 and Q1 as a single block, he is selectively ignoring the very "variance" that illustrates a huge increase in Shops in the MTGO data set.
Look. I took a single sentence out of this article that I found to be objectionable based upon all of the available evidence. My data strongly disputed his claim.
I disagree that the data strongly disputes his claim. It can be read as disputing it, but if you look more broadly, his statement is not as black-and-white, true-or-false, as you make it seem.
OK, let's look at his statement more broadly. Here is the sentence, along with the sentence that precedes it:
"One could make a very strong argument that Monastery Mentor decks were the best deck in the format prior to the April 4th changes. They occupied basically the same percentage of the metagame as all of the Mishra’s Workshop decks combined, and unlike its artifact based counterpart there is no real good way to combat it. "
So, he's implicitly arguing that Mentor was the best deck before the most recent restriction, or at a minimum, tentatively endorsing such an argument. And then he presents a statistical claim to support that argument. So, for the argument he is either advancing or implicitly endorsing to be true, the facts upon which it relies must also be true. This is not editorializing. This is a claim. If my staff put such a claim in a report, article, or brief, I would demand they support it.
Mentor is a heavily played Gush deck - one of the best and most played versions, in fact - and in paper it out-represents Shops. I don't find it offensive for him to say they see *about* the same play.
I don't find it offensive; I find it factually untrue.
And you are correct, Shops could see heavier play due to variance, but that does not seem as plausible based on the larger tournament results.
Really? If you believe that, then you are ignoring the facts. If we look at the larger tournaments, Shops performs just as well, if not better. See below.
Recall again that Danny's data includes Q4, whereas mine is just Q1. If Danny's figures are accurate, and Q4 and Q1 combined put Shops at only 22% of MTGO daily results while Q1 alone has Shops at 30%, then Shops must have been at around 14% in Q4 for the combined figure to average out.
That means that Shops more than doubled between Q4 and Q1. So, if we are going to look at "larger" tournament results and more tournaments, and we really care about trends, the trend is clear: Shops had a dramatic increase in Q1.
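The back-of-the-envelope arithmetic here can be made explicit. A minimal sketch, assuming (since the thread doesn't say) that the two quarters contribute roughly equal numbers of decklists, so the combined figure is a simple average:

```python
# Back-solve the implied Q4 Shops share from the two reported figures.
# Assumption (mine, not stated in the thread): Q4 and Q1 contribute roughly
# equal numbers of decklists, so the combined rate is a simple average.
q1_share = 0.30        # Shops share of Q1 dailies
combined_share = 0.22  # Shops share of Q4 + Q1 dailies (Danny's figure)

q4_share = 2 * combined_share - q1_share
print(f"Implied Q4 Shops share: {q4_share:.0%}")  # roughly 14%
```

If the two quarters had unequal deck counts, the implied Q4 share would shift somewhat, but not enough to erase a sizable jump into Q1.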
After all, you are now arguing that what we should care about is trends. The variance argument actually plays into my critique. Shops trended dramatically upwards in Q1, and any variance over time is variance that should be interpreted, if we credit trend data, towards Shops increasing frequency.
The difference between 16% and 22% of a Vintage metagame is actually an enormous gulf. In fact, it's probably much larger than you would have us believe. Consider a few facts that will put this in context:
Again, you want it to be true for some and not for others. 16% versus 22% is a huge gulf in the aggregate, but not when spread over 48 events each reporting 4-7 decklists. In application, it means you might see one more Shops deck than Mentor deck at a given event. This is why I don't understand why his statement was so offensive. The difference in play is less than one deck per event over nearly 50 events. It's only when you turn it into aggregate data that it becomes an 'enormous gulf.' But that's not what your statistics describe - the statistics describe the small events where they were generated.
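The per-event arithmetic behind this point is easy to check. A sketch, taking the midpoint of the 4-7 decklists per daily mentioned above:

```python
# Translate a 6-percentage-point metagame gap into decks per daily event.
events = 48
decks_per_event = 5.5     # midpoint of the 4-7 decklists reported per daily
gap = 0.22 - 0.16         # Shops share minus Mentor share (Danny's figures)

per_event = gap * decks_per_event
print(f"Extra Shops decks per event: {per_event:.2f}")                 # about 0.33
print(f"Extra Shops decks over all events: {per_event * events:.0f}")  # about 16
```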
Your argument is tantamount to an argument against aggregation. It's absurd on its own terms, but taking it at face value, my point holds whether we look at dailies or Premier events.
Let's look at Premier events. In Q1, there were only 8 Mentor decks in the MTGO P9 Top 16s, for an overall share of 16.66% of decks. In contrast, Shops were 31% of those decks.
So, if we look at Q1 using a smaller data set of 3 events with 16 decklists per event, and with much less risk of particular players being overrepresented, then how preposterous is it to claim that 16.66% is about the same as 31%?
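For concreteness, the Premier-event percentages translate back into raw deck counts like this (a sketch; the ~15 Shops decks is my back-calculation from the 31% figure, not a count taken from the source):

```python
# Convert the Premier-event shares back into approximate deck counts.
total_decks = 3 * 16            # 3 Premier events, 16 decklists each
mentor_decks = 8                # stated directly above
shops_share = 0.31              # stated directly above

print(f"Mentor: {mentor_decks / total_decks:.2%}")                 # 16.67%
print(f"Shops:  ~{round(shops_share * total_decks)} of {total_decks} decks")
```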
Exactly. Your point here is a straw man that actually undermines any use of data.
Which is what I point out. Data has to be used in context, and now you flip-flop from using data to prove fallacy to 'looking for trends.' That's what I've advocated all along. But to look at trends, you have to look at how the data is generated. Following top-level Shops mages on MTGO is going to artificially skew the data, but you don't seem willing to admit that it does.
While I'm glad we are now on the same page regarding "looking at trends," if you go back and look at any article I've ever published on Metagame Analysis that I've linked to earlier, it's clear that's the entire goal.
The point of looking at aggregate data is to discern trends. That's not flip-flopping; that's the essence of what analyzing the metagame is for.
If MTGO dailies were as skewed as you suggest, then the data for Q1 dailies and the MTGO P9 challenges wouldn't be virtually the same. Yet the Top 16 and Top 8 data from the premier events is almost statistically identical to the dailies. So this "skew" effect that you keep harping on in an attempt to undermine the validity of the dailies is simply not evident when we compare the dailies to the premiers. Same stats. Same numbers.
Again, I'll agree to disagree. I think he made an overstatement, but I don't find it offensive. I don't think you could distinguish 50 decks from 72 decks using sound statistical tools to analyze this data set as a description of the metagame as a whole over Q1. I think it becomes even more difficult if the analysis accounted for pilots. If it were repeated over another quarter (i.e., a trend), I think you would have a statistical argument, but we'll never know.
It's not 50. It's 28 compared to 72. That's the number of Mentor decks in the data set compared to the number of Shop decks.
I was using the 50 Gush decks to illustrate that not even the total number of Gush decks comes close to the number of Shop decks, and only 23 of the 50 Gush decks were Mentor decks.
To accept your overall argument concerning Danny's claim, one would have to believe that 28 is "reasonably" close enough to 72, given variance. That's just nonsense. It's miles away, not inches.
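One way to see how far apart 28 and 72 really are, statistically: if Mentor and Shops truly occupied "basically the same percentage" of the metagame, then each of the 100 decks falling into one of those two archetypes would be Shops with probability one half. A quick sketch using the normal approximation to the binomial (my illustration, not a calculation from the thread):

```python
import math

# Null hypothesis: Mentor and Shops are equally represented, so each of the
# 28 + 72 = 100 decks in question is Shops with probability 0.5.
mentor, shops = 28, 72
n = mentor + shops
p0 = 0.5

mean = n * p0                       # expected Shops count under the null: 50
sd = math.sqrt(n * p0 * (1 - p0))   # standard deviation: 5

z = (shops - mean) / sd
print(f"z = {z:.1f}")               # z = 4.4, far past any conventional cutoff
```

A z-score of 4.4 corresponds to a p-value on the order of one in a hundred thousand, so "variance" is not a plausible explanation for a gap this size.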