Or, they’re very good at it, and some people just don’t like the results.
The way their surveys work absolutely suffers from response bias; they're getting the opinions of people who actually respond to their surveys, who are generally going to be more dedicated players than the average. That may well skew responses towards homebrew (low-effort games are likely to run published campaigns, not create their own), but it also likely skews responses towards caring more about lore rather than less (if you're running a published campaign, you probably don't cross-reference the MM lore because it really doesn't matter).
So surveys are biased towards the people that respond to surveys? What exactly are you expecting them to do about that?
Not to mention, surveys were never their only source of information on how people play the game and what they care about, especially now.
So surveys are biased towards the people that respond to surveys? What exactly are you expecting them to do about that?
Response bias is a known problem in polling; there are techniques for improving the results, but they are substantially more expensive and still imperfect.
As far as response bias goes in polling, corporations have biases towards people who fill out their surveys. A company is not interested in groups of people that, no matter what they do, will never want their products. Selective response bias is less of an issue when those who are likely to fill out the survey still accurately represent the group that you are actually interested in. WotC is not interested in what everyone in the world thinks; they are interested in what people who buy WotC products think, and the people who fill out the polls are going to be closer to that than not. This doesn't mean the surveys are perfect, and they realize this. Just look at the last 2 surveys: they completely changed their questionnaire and survey setup, probably to get more accurate results and feedback. They may miss the mark for some, but it is unmistakable that their ultimate goal is to please as many customers as possible. That's how every company functions, because pleasing customers means returning customers (and possibly a growing customer base), which means more sales, which means happy shareholders, which means they stay in business and have jobs.
As far as response bias goes in polling, corporations have biases towards people who fill out their surveys. A company is not interested in groups of people that, no matter what they do, will never want their products.
The problem isn't that you wind up with people who buy your product. The problem is that you wind up with a subset of your buyers, who are not necessarily representative. This doesn't mean the data is useless... just imperfect.
As far as response bias goes in polling, corporations have biases towards people who fill out their surveys. A company is not interested in groups of people that, no matter what they do, will never want their products.
The problem isn't that you wind up with people who buy your product. The problem is that you wind up with a subset of your buyers, who are not necessarily representative. This doesn't mean the data is useless... just imperfect.
All data is imperfect, and I doubt this was the only data point they had; they seemed pretty adamant that it wasn't. Of course, that doesn't mean the choice they made was right either. I don't think the method they used to reach their conclusions, or the reason they did what they did, is really relevant in the end. In the end, either the product appeals to you or it doesn't. But obviously the goal will always be to appeal to as many customers as they can, and there isn't much incentive for them to do less than the best that they can. So far I am pretty pleased with 2024 over 2014, even if I still find 2024 isn't perfect. I didn't find 2014 perfect either.
So surveys are biased towards the people that respond to surveys? What exactly are you expecting them to do about that?
Response bias is a known problem in polling; there are techniques for improving the results, but they are substantially more expensive and still imperfect.
No no, leave those goalposts where they are. You're not alluding to a mere slant in the survey responses, but to the notion that huge swaths of the playerbase are not capable of being represented in them at all, despite their polls not excluding anyone or having any real barriers to participation. And you're basing this conclusion on... absolutely nothing that I can see.
All data is imperfect, and I doubt this was the only data point they had; they seemed pretty adamant that it wasn't. Of course, that doesn't mean the choice they made was right either. I don't think the method they used to reach their conclusions, or the reason they did what they did, is really relevant in the end. In the end, either the product appeals to you or it doesn't. But obviously the goal will always be to appeal to as many customers as they can, and there isn't much incentive for them to do less than the best that they can. So far I am pretty pleased with 2024 over 2014, even if I still find 2024 isn't perfect. I didn't find 2014 perfect either.
This too - they have access to other data sources, such as seeing what we use in play, and how many people use printed vs homebrew settings.
No no, leave those goalposts where they are. You're not alluding to a mere slant in the survey responses, but the notion that huge swaths of the playerbase are not capable of being represented in them at all.
There is a huge swath of the playerbase that does not meaningfully interact with D&D media (many don't interact with D&D at all outside of their games). This group is hard to poll (it's hard to even know how many there are, though counting sales of the PHB is at least a semi-plausible proxy).
Or, they’re very good at it, and some people just don’t like the results.
The way their surveys work absolutely suffers from response bias; they're getting the opinions of people who actually respond to their surveys, who are generally going to be more dedicated players than the average. That may well skew responses towards homebrew (low-effort games are likely to run published campaigns, not create their own), but it also likely skews responses towards caring more about lore rather than less (if you're running a published campaign, you probably don't cross-reference the MM lore because it really doesn't matter).
So surveys are biased towards the people that respond to surveys? What exactly are you expecting them to do about that?
Not to mention, surveys were never their only source of information on how people play the game and what they care about, especially now.
Yes, they are. I... don't really care what they do. If they can't account for it... the numbers are crap. If I say that there will be a massive solar flare 100 days from now, and someone points out that my maths is bogus and it's essentially random, the correct response isn't "what do you expect me to do about it? Invent new physics that will let me predict the future?", it's to recognise that my numbers are essentially useless. That there isn't a better way to do it is irrelevant - it's either a useful tool or it's not, and it's not.
60% is about as good a guess as 50%, 40% or even 30%. Or even 70% or 80%. There is a substantial number of homebrew games, and a substantial number of published adventures. Trying to divine the exact ratio via tea leaves is kinda pointless in my opinion.
If you're not willing or able to discuss in good faith, then don't be surprised if I don't respond; there are better things in life for me to do than humour you. This signature is that response.
I was formerly in research. You do not need to have very many responses to get a representative sample. You can get a representative sample with as few as 100 completed surveys. The smaller the sample, the larger the margin of error. Fortunately, WotC does not have that problem and reliably gets thousands of surveys whenever they put one out. This is representative data and the margin of error would be small. It took my team about two years to gather the number of surveys WotC gets in a couple weeks, and my research was representative of tens of millions of people.
I was formerly in research. You do not need to have very many responses to get a representative sample.
You don't need that many responses to not worry about random error -- given X respondents, your responses will 95% of the time be within 100/sqrt(X)% of the true value -- but number of responses doesn't help at all with non-random bias.
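The 100/sqrt(X) figure quoted above is the standard worst-case 95% margin of error for a proportion; a minimal sketch of where it comes from (plain Python, illustrative sample sizes only):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error, in percentage points, for a
    proportion estimated from n random respondents (normal approximation).
    p=0.5 maximizes p*(1-p), giving the worst case; z=1.96 is the
    two-sided 95% critical value, so z * sqrt(0.25/n) ~= 1/sqrt(n)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Random error shrinks with sqrt(n)...
for n in (100, 1_000, 10_000):
    print(n, round(margin_of_error(n), 1))  # 9.8, 3.1, 1.0 points
# ...but no value of n corrects for a non-random (self-selection) bias.
```

Note that this models only random sampling error, which is exactly the distinction the reply above is drawing: a bigger sample narrows the interval but does nothing about who chose to respond.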
I was formerly in research. You do not need to have very many responses to get a representative sample.
You don't need that many responses to not worry about random error -- given X respondents, your responses will 95% of the time be within 100/sqrt(X)% of the true value -- but number of responses doesn't help at all with non-random bias.
When dealing with population surveys, a degree of generalizability is assumed based on the kind of data and the amount of data collected. If you think that those who self-select for surveys inherently play D&D differently than those who do not, you would need to support that statement with data; otherwise you are simply projecting your own bias onto the results to accomplish what seems to be some kind of rhetorical goal. Self-selection is a non-random bias, but it is only material when the data would be unequal between the two groups.
I was formerly in research. You do not need to have very many responses to get a representative sample. You can get a representative sample with as few as 100 completed surveys. The smaller the sample, the larger the margin of error. Fortunately, WotC does not have that problem and reliably gets thousands of surveys whenever they put one out. This is representative data and the margin of error would be small. It took my team about two years to gather the number of surveys WotC gets in a couple weeks, and my research was representative of tens of millions of people.
You only need small samples if you have a good idea of how things intersect with demographics. For example, if you know that white males tend to make similar voting choices, then you only need to sample enough white males to be confident that you know how white males are likely to vote, and extrapolate. Then repeat for white females, black males, etc. Then from a fairly small sample, you can project how the population is going to vote. That's true because a lot of money, time and effort over the decades has been poured into understanding how all these factors interact and play on each other (so they can adjust for more than just race and sex, but age, location, income bracket and so forth).
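The extrapolation step described above (post-stratification weighting) can be sketched in a few lines; the groups and numbers here are entirely made up for illustration:

```python
# Hypothetical subgroup data, purely for illustration:
# subgroup -> (respondents in sample, % answering "yes")
sample = {
    "group_a": (400, 70.0),
    "group_b": (100, 40.0),
}
# Known (assumed) share of each subgroup in the real population:
population_share = {"group_a": 0.5, "group_b": 0.5}

# Raw (unweighted) estimate is dominated by the over-sampled group...
total = sum(n for n, _ in sample.values())
raw = sum(n * pct for n, pct in sample.values()) / total

# ...while the weighted estimate reweights each subgroup's answer by
# its true population share, correcting for the skewed sample.
weighted = sum(population_share[g] * pct for g, (n, pct) in sample.items())

print(raw, weighted)  # 64.0 vs 55.0
```

The catch, as the post goes on to say, is that this only works if you actually know the true population shares and that respondents within each subgroup are representative of it, which is exactly the expensive modeling work the polling industry does.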
And we all know how those polls have never gotten things wrong, right.
If your sample is that biased, you need to understand extremely well how everything interacts in order to extrapolate an accurate picture (and bigger industries, with much more money riding on the results being accurate and far more resources poured into constructing better models, still get things wrong).
Sorry. I'll agree that homebrew makes up a substantial proportion of games. Telling me that they know that 60% of games are in homebrew settings, with error bars small enough to make that number more meaningful than the previous statement, is going to be a hard sell.
I was formerly in research. You do not need to have very many responses to get a representative sample. You can get a representative sample with as few as 100 completed surveys. The smaller the sample, the larger the margin of error. Fortunately, WotC does not have that problem and reliably gets thousands of surveys whenever they put one out. This is representative data and the margin of error would be small. It took my team about two years to gather the number of surveys WotC gets in a couple weeks, and my research was representative of tens of millions of people.
You only need small samples if you have a good idea of how things intersect with demographics. For example, if you know that white males tend to make similar voting choices, then you only need to sample enough white males to be confident that you know how white males are likely to vote, and extrapolate. Then repeat for white females, black males, etc. Then from a fairly small sample, you can project how the population is going to vote. That's true because a lot of money, time and effort over the decades has been poured into understanding how all these factors interact and play on each other (so they can adjust for more than just race and sex, but age, location, income bracket and so forth).
And we all know how those polls have never gotten things wrong, right.
If your sample is that biased, you need to understand extremely well how everything interacts in order to extrapolate an accurate picture (and bigger industries, with much more money riding on the results being accurate and far more resources poured into constructing better models, still get things wrong).
Sorry. I'll agree that homebrew makes up a substantial proportion of games. Telling me that they know that 60% of games are in homebrew settings, with error bars small enough to make that number more meaningful than the previous statement, is going to be a hard sell.
A very impassioned response. Do you think that you are approaching the data with clarity of thought, free of your own bias? Because as an observer, that really does not seem to be the case. Why does this matter to you so much?
That's called poisoning the well, I was discussing this in good faith and talking about the problems that are inherent to the claimed numbers, no "passion". Apparently that's not a reasonable assumption for me to have of others. Check my signature. I'm out.
If you're not willing or able to to discuss in good faith, then don't be surprised if I don't respond, there are better things in life for me to do than humour you. This signature is that response.
The same people that are not meaningfully interacting with D&D and are not interacting with the surveys are also the same people that are not meaningfully engaged in the lore or settings. They don't care about any of it beyond what the campaign book they are playing tells them. If they did, then they would be meaningfully interacting, right?
So if you were a company, which people's opinions do you think matter most, the ones that don't care or the ones that do?
Mother and Cat Herder. Playing TTRPGs since 1989 (She/Her)
When dealing with population surveys, a degree of generalizability is assumed based on the kind of data and the amount of data collected. If you think that those who self-select for surveys inherently play D&D differently than those who do not, you would need to support that statement with data
I'm sorry, that's not how the burden of proof works. A survey is a good indicator of the opinions of the class of people who respond to the survey, but if you want to extrapolate the results beyond that class, you'll need to give reasons for believing that it's valid to do so.
Now, Wizards does have a fairly good data set -- it's a lot better than, say, asking the people you personally know, or a random D&D YouTuber polling their followers, both of which are typical of their critics -- but it's by no means perfect, which is easily demonstrated by their multiple fumbles over the last few years.
None of which means that Wizards is wrong to cut down on lore in the monster manual -- personally, I suspect they're right -- but it's useful to understand the limits of data (my suspicion is that there's a group of people who are running a game with a published module, and are therefore using a published setting... but also don't care about the lore).
When dealing with population surveys, a degree of generalizability is assumed based on the kind of data and the amount of data collected. If you think that those who self-select for surveys inherently play D&D differently than those who do not, you would need to support that statement with data
I'm sorry, that's not how the burden of proof works. A survey is a good indicator of the opinions of the class of people who respond to the survey, but if you want to extrapolate the results beyond that class, you'll need to give reasons for believing that it's valid to do so.
Now, Wizards does have a fairly good data set -- it's a lot better than, say, asking the people you personally know, or a random D&D YouTuber polling their followers, both of which are typical of their critics -- but it's by no means perfect, which is easily demonstrated by their multiple fumbles over the last few years.
None of which means that Wizards is wrong to cut down on lore in the monster manual -- personally, I suspect they're right -- but it's useful to understand the limits of data (my suspicion is that there's a group of people who are running a game with a published module, and are therefore using a published setting... but also don't care about the lore).
Surveys are far better at telling someone what people don't like, or what they think they would like better, than at how to actually produce what they would like in a profitable manner.
It is simply not a given that consumers have reasonable expectations. It is also a simple reality that coming up with great new products consistently is really not simple at all.
That's called poisoning the well, I was discussing this in good faith and talking about the problems that are inherent to the claimed numbers, no "passion". Apparently that's not a reasonable assumption for me to have of others. Check my signature. I'm out.
It is a fair question to ask, in context. You don't have data to support your dismissal of the numbers; it is baseless speculation. It is fair to ask where that is coming from, if not from a conclusion drawn from the data itself. This response does little to indicate that I was off the mark. The data isn't biased because you don't like it, and these are not political opinion polls, which are snapshots of opinion in a moment and shift dramatically between months of data points. You are right - market researchers do need to understand how this data all interacts, but they are professionals in the field, with much better insight into that data, and you and I are just consumers looking at the drawn conclusions of that research.
When dealing with population surveys, a degree of generalizability is assumed based on the kind of data and the amount of data collected. If you think that those who self-select for surveys inherently play D&D differently than those who do not, you would need to support that statement with data
I'm sorry, that's not how the burden of proof works. A survey is a good indicator of the opinions of the class of people who respond to the survey, but if you want to extrapolate the results beyond that class, you'll need to give reasons for believing that it's valid to do so.
Now, Wizards does have a fairly good data set -- it's a lot better than, say, asking the people you personally know, or a random D&D YouTuber polling their followers, both of which are typical of their critics -- but it's by no means perfect, which is easily demonstrated by their multiple fumbles over the last few years.
None of which means that Wizards is wrong to cut down on lore in the monster manual -- personally, I suspect they're right -- but it's useful to understand the limits of data (my suspicion is that there's a group of people who are running a game with a published module, and are therefore using a published setting... but also don't care about the lore).
Actually, that is how burden of proof works. You made the positive claim (or implied one) that there is a meaningful difference in D&D players and how they play between those who choose to voluntarily take surveys and those who do not. The burden of proof, therefore, becomes yours upon making that claim. Back it up with evidence or it will be dismissed.
Further, the statement that 60% of players play in a homebrew setting is not an opinion, it is a conclusion reached based on surveys of behaviors.
No one said their surveys were perfect; they don't need to be. They only need to provide data that allows for approximate conclusions to be drawn. Is it exactly 60%? No, probably not. But it is reasonable to say that it is approximately 60% of players that do X and Y based on their responses. This methodology is good enough to draw conclusions on in population health, economics, epidemiology, and all the social sciences, and it is far far more than what is required to base business decisions on in what content to expand on and what content to cut. When you are looking at populations, you cannot be exact, but you can be accurate and precise, which is what you strive for in research that deals with population data.
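To put rough error bars on the "approximately 60%" figure discussed above, the standard confidence interval for a proportion works as a sketch; the response count below is an assumption for illustration, not WotC's actual number:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

# Assuming (hypothetically) 5,000 responses and a 60% point estimate:
lo, hi = proportion_ci(0.60, 5000)
print(round(lo, 3), round(hi, 3))  # roughly 0.586 to 0.614
```

As the thread's skeptics would note, this interval only captures random sampling error; it says nothing about self-selection bias, which is the part actually in dispute.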
So, something to keep in mind is that the market research done is not merely Polls. Polls are often the smallest practical portion of it, and function as a correction and a guideline in reading results of other research. A Poll is a very specific tool -- and there are other ways to get stuff that a layperson would probably think of as a poll, but is not a Poll.
It does depend on the kind of research one does, but market research in general is not overly reliant on Polls (which have a different structure than other research tools, including others that have similar self-selection bias). Not to mention that the nature of how the research is conducted affects the manner in which that self-selection bias is controlled for.
For example, the UA feedbacks are not polls. And while they can be used for market research, they are not in and of themselves market research tools.
Furthermore, Polls (such as political polls) and Polling have different structures and rules depending on how the Poll is going to be used and for what purpose.
Which I mention because a lot of the stuff about data polls being biased and correction algorithms are referencing approaches that do not apply in market research.
There is no doubt in my mind that the team at Wizards paid attention to the decade of videos, blog posts, forum whines, reddit threads, and other things -- no doubt because among the most common of complaints was that the monsters were too weak. They looked at why the monsters were too weak, and realized that part of it was the design approach in 2014, and so they attempted a correction there that also reflects their own creative impulses. They did a great job of it -- the monsters are, overall, now more capable of a consistent output in key metrics for their given CR, regardless of how they are used. They also, in other books, put more emphasis on tactical and strategic play.
Which I mention because this is a thread not about statistics and data modeling, but one about how people feel about the new monster manual.
One of the other common complaints was that people did not want the lore in the core books. That was one of the earliest complaints in the new edition -- 2015 and 2016 saw it frequently. It is something that they receive as consistent feedback -- enough that it has been referenced a few times (such as the source of my percentages). So, they addressed that.
Most of the major issues come from people not liking the way that the designers tackled certain things, because it goes against the grain of what they would have done. I am still super annoyed by the way classes work in 5e, myself, so I get it.
I like the new Monster Manual. I can see why they did things, what they did, and how they did it. I may not like some decisions, but so what? It's done. My not liking them won't change them for anyone else, but I can sure as heck change things for myself (like making all my dragons CR 30 monsters).
To me, that's always been the greatest strength of D&D -- that the DM is expected to change things themselves. Other folks don't see things that way. As was pointed out earlier, some folks want the game to only be played in a certain way, without changes by the DM. But they are always going to be angry because the game changes, "evolves", shifts according to the design lead and the edition focus, and that happens because of that core concept that has existed since the earliest era of a box with three little books in it: the DM is supposed to change things.
Just like I was angry when I saw 3e and said "nah".
Only a DM since 1980 (3000+ Sessions) / PhD, MS, MA / Mixed, Bi, Trans, Woman / No longer welcome in the US, apparently
Wyrlde: Adventures in the Seven Cities .-=] Lore Book | Patreon | Wyrlde YT [=-. An original Setting for 5e, a whole solar system of adventure. Ongoing updates, exclusives, more. Not Talking About It / Dubbed The Oracle in the Cult of Mythology Nerds
To post a comment, please login or register a new account.
So surveys are biased towards the people that respond to surveys? What exactly are you expecting them to do about that?
Not to mentions, surveys were never their only source of information on how people play the game and what they care about, especially now.
Response bias is a known problem in polling; there are techniques for improving it the results, but they are substantially more expensive and still imperfect.
As far as response bias goes in polling corporations have biases towards people who fill out their surveys. A company is not interested in groups of people that, no matter what they do, will never want their products. The selective response biases are less of an issue when those that are likely to fill it out still accurately represent the group that you are actually interested in. WotC is not interested in what everyone in the world things, they are interested in what do people who buy WotC products think and the people that fill out the polls are going to be closer to that than not. This doesn't mean they are perfect and they realize this. Just look at the last 2 surveys, they completely changed their questionnaire and survey set up, probably to get more accurate results and feedback. They may miss the mark for some, but it is unmistakable that their ultimate goal is to please as many customers as possible. That's how every company functions because pleasing customers means returning customers, and possibly growing customer base, means more sales, means happy share holders means they stay in business and have jobs.
The problem isn't that you wind up with people who buy your product. The problem is that you wind up with a subset of your buyers, who are not necessarily representative. This doesn't mean the data is useless... just imperfect.
All data is imperfect, I doubt that this was the only data point they had. They seemed pretty adamant that it wasn't. Of course that doesn't mean the choice they made was right either. I don't think the method they came up with to come to their conclusions or the reason they did what they did is really relevant in the end. In the end it is either the product appeals to you or it doesn't. But obviously the goal will always be to appeal to the most amount of customers as they can and their isn't much incentive for them to do less than the best that they can. So far I am pretty pleased with 2024 over 2014 even if I still find 2024 isn't perfect. I didn't find 2014 perfect either.
No no, leave those goalposts where they are. You're not alluding to a mere slant in the survey responses, but the notion that huge swaths of the playerbase are not capable of being represented in them at all, despite their polls not excluding anyone or having any real barriers to participation. And you're basing this conclusion on.... absolutely nothing that I can see.
This too - they have access to other data sources, such as seeing what we use in play, and how many people use printed vs homebrew settings.
There is a huge swath of the playerbase that does not meaningfully interact with D&D media (many don't interact with D&D at all outside of their games). This group is hard to poll (it's hard to even know how many there are, though counting sales of the PHB is at least a semi-plausible proxy).
Yes, they are. I... don't really care what they do. If they can't account for it...the numbers are crap. If I say that there will be a massive solar flare in 100 days from now, someone points out that my maths is bogus and it's essentially random, the correct response isn't "what do you expect me to do about it? Invent new physics that will let me predict the future?", it's to recognise that my numbers are essentially useless. That there isn't a better way to do it is irrelevant - it's either a useful tool or it's not, and it's not.
60% is about as good a guess as 50%, 40% or even 30%. Or even 70% or 80%. There is a substantial number of homebrew games, and a substantial number of published adventures. Trying to divine via tealeaves the exact ratio is kinda pointless in my opinion.
If you're not willing or able to to discuss in good faith, then don't be surprised if I don't respond, there are better things in life for me to do than humour you. This signature is that response.
I was formerly in research. You do not need to have very many responses to get a representative sample. You can get a representative sample with as few as 100 completed surveys. The smaller the sample, the larger the margin of error. Fortunately, WotC does not have that problem and reliably gets thousands of surveys whenever they put one out. This is representative data and the margin of error would be small. It took my team about two years to gather the number of surveys WotC gets in a couple weeks, and my research was representative of tens of millions of people.
DM mostly, Player occasionally | Session 0 form | He/Him/They/Them
EXTENDED SIGNATURE!
Doctor/Published Scholar/Science and Healthcare Advocate/Critter/Trekkie/Gandalf with a Glock
Try DDB free: Free Rules (2024), premade PCs, adventures, one shots, encounters, SC, homebrew, more
Answers: physical books, purchases, and subbing.
Check out my life-changing
You don't need that many responses to not worry about random error -- given X respondents, your responses will 95% of the time be within 100/sqrt(X)% of the true value -- but number of responses doesn't help at all with non-random bias.
When dealing with population surveys, degrees of generalizability is assumed based on the kind of data and the number of data collected. If you think that those who self-select for surveys inherently play D&D differently than those who do not, you would need to support that statement with data, otherwise you are simply projecting your own bias onto the results to accomplish what seems to be some kind of rhetorical goal. Self-select is a non-random bias, but is only material when that data would be unequal between the two groups.
DM mostly, Player occasionally | Session 0 form | He/Him/They/Them
EXTENDED SIGNATURE!
Doctor/Published Scholar/Science and Healthcare Advocate/Critter/Trekkie/Gandalf with a Glock
Try DDB free: Free Rules (2024), premade PCs, adventures, one shots, encounters, SC, homebrew, more
Answers: physical books, purchases, and subbing.
Check out my life-changing
You only need small samples if you have a good idea of how things intersect with demographics. For example, if you know that white males tend to make similar voting choices, then you only need to sample enough white males to be confident that you know how white males are likely to vote, and extrapolate. Then repeat for white females, black males, etc. Then from a fairly small sample, you can project how the population is going to vote. That's true because a lot of money, time and effort over the decades has been poured into understanding how all these factors interact and play on each other (so they can adjust for more than just race and sex, but age, location, income bracket and so forth).
And we all know how those polls have never gotten things wrong, right?
If your sample is that biased, you need to understand extremely well how everything interacts in order to extrapolate an accurate picture (and even industries with far more money riding on accurate results, pouring far more resources into constructing better models, still get things wrong).
Sorry. I'll agree that homebrew makes up a substantial proportion of games. Telling me that they know 60% of games are in homebrew settings, with error bars small enough to make that number more meaningful than the previous statement, is going to be a hard sell.
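The demographic extrapolation described a couple of posts up can be sketched as simple post-stratification reweighting. All strata and numbers below are invented for illustration:

```python
# Hypothetical post-stratification sketch: reweight survey answers so each
# stratum counts by its known population share rather than by how many of
# its members happened to respond.

sample = {                       # stratum -> (respondents, share answering "yes")
    "dedicated players": (800, 0.70),
    "casual players":    (200, 0.40),
}
population_share = {"dedicated players": 0.5, "casual players": 0.5}  # assumed known

# Naive estimate: over-represented strata dominate the average.
raw = sum(n * p for n, p in sample.values()) / sum(n for n, _ in sample.values())

# Reweighted estimate: each stratum contributes by its true population share.
weighted = sum(population_share[s] * p for s, (_, p) in sample.items())

print(f"raw {raw:.2f} vs weighted {weighted:.2f}")  # the gap is the selection bias
```

The catch, as the posts above argue from both sides, is that the correction is only as good as your knowledge of the true population shares, which a company can only estimate.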
If you're not willing or able to discuss in good faith, then don't be surprised if I don't respond; there are better things in life for me to do than humour you. This signature is that response.
A very impassioned response. Do you think that you are approaching the data with clarity of thought or lack of your own bias? Because as an observer, that really does not seem to be the case. Why does this matter to you so much?
That's called poisoning the well. I was discussing this in good faith and talking about the problems inherent to the claimed numbers; no "passion" involved. Apparently that's not a reasonable assumption for me to have of others. Check my signature. I'm out.
The same people that are not meaningfully interacting with D&D and are not interacting with the surveys are also the same people that are not meaningfully engaged in the lore or settings. They don't care about any of it beyond what the campaign book they are playing tells them. If they did, then they would be meaningfully interacting, right?
So if you were a company, which people's opinions do you think matter most, the ones that don't care or the ones that do?
Mother and Cat Herder. Playing TTRPGs since 1989 (She/Her)
I'm sorry, that's not how the burden of proof works. A survey is a good indicator of the opinions of the class of people who respond to the survey, but if you want to extrapolate the results beyond that class, you'll need to give reasons for believing that it's valid to do so.
Now, Wizards does have a fairly good data set -- it's a lot better than, say, asking the people you personally know, or a random D&D YouTuber polling their followers, both of which are typical of their critics -- but it's by no means perfect, which is easily demonstrated by their multiple fumbles over the last few years.
None of which means that Wizards is wrong to cut down on lore in the monster manual -- personally, I suspect they're right -- but it's useful to understand the limits of data (my suspicion is that there's a group of people who are running a game with a published module, and are therefore using a published setting... but also don't care about the lore).
Surveys are far better at telling someone what people don't like, or what they think they would like better, than at how to actually produce what they would like in a profitable manner.
It is simply not a given that consumers have reasonable expectations. It is also a simple reality that coming up with great new products consistently is really not simple at all.
It is a fair question to ask, in context. You don't have data to support your dismissal of the numbers; it is baseless speculation, so one is fair to ask where that dismissal is coming from, if not a conclusion drawn from the data itself. This response does little to indicate that I was off the mark. The data isn't biased just because you don't like it, and these are not political opinion polls: snapshots of opinion in a moment that shift dramatically between months of data points. You are right that market researchers need to understand how all this data interacts, but they are professionals in the field with much better insight into that data, while you and I are just consumers looking at the drawn conclusions of that research.
Actually, that is how burden of proof works. You made the positive claim (or implied one) that there is a meaningful difference in D&D players and how they play between those who choose to voluntarily take surveys and those who do not. The burden of proof, therefore, becomes yours upon making that claim. Back it up with evidence or it will be dismissed.
Further, the statement that 60% of players play in a homebrew setting is not an opinion, it is a conclusion reached based on surveys of behaviors.
No one said their surveys were perfect; they don't need to be. They only need to provide data that allows for approximate conclusions to be drawn. Is it exactly 60%? No, probably not. But it is reasonable to say that approximately 60% of players do X and Y based on their responses. This methodology is good enough to draw conclusions on in population health, economics, epidemiology, and all the social sciences, and it is far, far more than what is required to base business decisions on regarding what content to expand and what content to cut. When you are looking at populations, you cannot be exact, but you can be accurate and precise, which is what you strive for in research that deals with population data.
Edit: Edited to be more specific.
So, something to keep in mind is that the market research done is not merely Polls. Polls are often the smallest practical portion of it, and function as a correction and a guideline in reading results of other research. A Poll is a very specific tool -- and there are other ways to get stuff that a layperson would probably think of as a poll, but is not a Poll.
It does depend on the kind of research one does, but market research in general is not overly reliant on Polls (which have a different structure than other research tools, including others that carry a similar self-selection bias). Not to mention that the nature of how the research is conducted affects the manner in which that self-selection bias is controlled for.
For example, the UA feedbacks are not polls. And while they can be used for market research, they are not in and of themselves market research tools.
Furthermore, Polls (such as political polls) and Polling have different structures and rules depending on how the Poll is going to be used and for what purpose.
Which I mention because a lot of the stuff about data polls being biased and correction algorithms are referencing approaches that do not apply in market research.
There is no doubt in my mind that the team at Wizards paid attention to the decade of videos, blog posts, forum whines, reddit threads, and other things -- no doubt because among the most common of complaints was that the monsters were too weak. They looked at why the monsters were too weak, and realized that part of it was the design approach in 2014, and so they attempted a correction there that also reflects their own creative impulses. They did a great job of it -- the monsters are, overall, now more capable of a consistent output in key metrics for their given CR, regardless of how they are used. They also, in other books, put more emphasis on tactical and strategic play.
Which I mention because this is a thread not about statistics and data modeling, but one about how people feel about the new monster manual.
One of the other common complaints was that people did not want the lore in the core books. That was one of the earliest complaints in the new edition -- 2015 and 2016 saw it frequently. It is something that they receive as consistent feedback -- enough that it has been referenced a few times (such as the source of my percentages). So, they addressed that.
Most of the major issues come from people not liking the way that the designers tackled certain things, because it goes against the grain of what they would have done. I am still super annoyed by the way classes work in 5e, myself, so I get it.
I like the new Monster Manual. I can see why they did things, what they did, and how they did it. I may not like some decisions, but so what? It's done. My not liking them won't change them for anyone else, but I can sure as heck change things for myself (like making all my dragons CR 30 monsters).
To me, that's always been the greatest strength of D&D -- that the DM is expected to change things themselves. Other folks don't see things that way. As was pointed out earlier, some folks want the game to only be played in a certain way, without changes by the DM. But they are always going to be angry because the game changes, "evolves", shifts according to the design lead and the edition focus, and that happens because of that core concept that has existed since the earliest era of a box with three little books in it: the DM is supposed to change things.
Just like I was angry when I saw 3e and said "nah".
Only a DM since 1980 (3000+ Sessions) / PhD, MS, MA / Mixed, Bi, Trans, Woman / No longer welcome in the US, apparently
Wyrlde: Adventures in the Seven Cities
.-=] Lore Book | Patreon | Wyrlde YT [=-.
An original Setting for 5e, a whole solar system of adventure. Ongoing updates, exclusives, more.
Not Talking About It / Dubbed The Oracle in the Cult of Mythology Nerds