Is OneD&D meant to push boundaries and grow the game's capabilities or to eliminate the barriers to adoption and usability?
Neither; it's meant to address known issues and common complaints from DMs and players alike (e.g. PAM+Sentinel, Druid complexity, the underpowered monk and ranger), while maintaining and promoting ease of adoption and use for beginners.
Not really. And people didn't want to have to buy all brand new books.
Don't you think this is bound to happen sooner or later? Do you expect D&D 5.x to last for eons?
Sorry for being so unconstructive; I expected more radical changes in the UA series.
Eventually sure. But there's still milk left in this cow. WOTC thinks they have a good thing here, and there is zero chance that the accountants are going to let them risk the gravy train until there's no other alternative.
Is OneD&D meant to push boundaries and grow the game's capabilities or to eliminate the barriers to adoption and usability?
None of the above really. It's supposed to fix some basic, long standing issues. Whip the lumps out of the mashed potatoes so to speak.
Is OneD&D meant to push boundaries and grow the game's capabilities or to eliminate the barriers to adoption and usability?
I think there may have been a window for more radical changes earlier in the process, but the playtest votes kept going against bigger things. I bet if the community had been more into big new stuff, we would have gotten that. I'm not trying to say that as a judgement for or against big new stuff; it's just my take on why it's much more incremental than radical, in addition to the above posters who correctly say the current version is too popular right now and they don't want to mess with it.
Is OneD&D meant to push boundaries and grow the game's capabilities or to eliminate the barriers to adoption and usability?
I think there may have been a window for more radical changes earlier in the process, but the playtest votes kept going against bigger things. I bet if the community had been more into big new stuff, we would have gotten that.
It might have helped if the "big new stuff" had actually been properly thought through before they released it to UA; it's not surprising they got negative reactions to the early Druid and Warlock, considering they made them both terrible. While simplifying the Druid makes sense, nobody wanted it to be vastly less useful and just worse all around, while the Warlock was made a half-caster without the other half, and invocations/Mystic Arcanum/pacts were a complete mess.
Negative feedback should not have been unexpected, and the survey format they use does not lend itself to "I like the idea, but…" feedback: you either completely love something or completely hate it, all in a single option. If they wanted to know which principles players supported, then they were asking exactly the wrong questions.
Except the surveys did have sections to write in commentaries, and frankly the kind of qualitative answers you’re saying they should have polled for more heavily aren’t particularly useful to a large-scale survey. They need quantitative data points they can translate into trends and analyze first. They were not looking for suggestions on what changes could be made going forward nearly as much as they simply wanted to know how positively the changes they introduced were viewed.
Is OneD&D meant to push boundaries and grow the game's capabilities or to eliminate the barriers to adoption and usability?
I think there may have been a window for more radical changes earlier in the process, but the playtest votes kept going against bigger things. I bet if the community had been more into big new stuff, we would have gotten that. I'm not trying to say that as a judgement for or against big new stuff; it's just my take on why it's much more incremental than radical, in addition to the above posters who correctly say the current version is too popular right now and they don't want to mess with it.
I disagree there, considering how some changes, like the flexible casting stat for Warlock, seemed to have pretty high support yet were still abandoned. I suspect the more radical changes, like the half-caster Warlock, flexible casting stat, and choose-your-spell-list Bard, were more experiments to see where they might want to go for 6e rather than serious considerations for the 5.5e revision.
Except the surveys did have sections to write in commentaries, and frankly the kind of qualitative answers you’re saying they should have polled for more heavily aren’t particularly useful to a large-scale survey. They need quantitative data points they can translate into trends and analyze first. They were not looking for suggestions on what changes could be made going forward nearly as much as they simply wanted to know how positively the changes they introduced were viewed.
Which is precisely why rating every question on "satisfaction" makes the surveys of dubious value; satisfaction is highly subjective and extremely reductive. I can be highly dissatisfied with a feature but see what they were going for; the text field is useless for nuance because they've tossed anything that scored low, apparently regardless of what people said (as they don't even mention that).
A much more useful scale would have been "works well -> needs work -> redesign" or something along those lines, i.e. actually give us a way to indicate where we'd like to see improvements, so that features getting a lot of "needs work" results can come back with improvements rather than being axed outright. It would be better to break them down into further questions, but if you're going to go simplistic, then three distinct options are better than basically having only two whose actual meaning you've got to guess at.
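To put it concretely, here's a rough, purely hypothetical sketch in Python; the 70% cut-off and the way responses get bucketed are my own guesses for illustration, not anything WotC has actually published:

# Rough hypothetical sketch: the 70% threshold and the bucketing are assumptions for illustration.

def five_point_verdict(responses, threshold=0.70):
    # Only the top two options count as a positive signal; anything under the
    # threshold just gets dropped, so "liked the idea, hated the execution" is lost.
    positive = sum(r in ("very satisfied", "satisfied") for r in responses)
    return "keep" if positive / len(responses) >= threshold else "drop"

def three_point_verdict(responses, threshold=0.70):
    # "needs work" is tracked separately, so a low "works well" score can still
    # come back as "iterate" instead of being axed outright.
    works = sum(r == "works well" for r in responses) / len(responses)
    needs_work = sum(r == "needs work" for r in responses) / len(responses)
    if works >= threshold:
        return "keep"
    return "iterate" if works + needs_work >= threshold else "drop"

# Most respondents like the concept but want a better implementation:
print(five_point_verdict(["dissatisfied"] * 6 + ["satisfied"] * 4))    # drop
print(three_point_verdict(["needs work"] * 6 + ["works well"] * 4))    # iterate

Same responses, but the second scale can tell "fix it" apart from "bin it".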
I highly doubt that much attention was paid to the text comments, and that's understandable given the volume of responses and the amount of work that would be required. I expect it's more likely that, for any results they wanted to dig into further, they just took a random sampling of text responses to see what people were saying alongside their very/somewhat satisfied/dissatisfied ratings, or else fed them into some kind of big-data algorithm to spit out common phrases, tone, etc. But it's very likely text responses were simply ignored for anything with a very high or very low satisfaction rating, as they seemed very focused on the scores.
Point being, there are better ways to ask the questions and get more useful responses; satisfied vs. dissatisfied only tells you people either loved or hated a feature as it was presented, but not why, or whether they want it axed or improved. It's a very poor metric to base actual decisions on.
Again, I was very much in favour of the idea of converting wild shape to templates, but I hated how weak it was and how much utility was lost, which is a position that satisfied vs. dissatisfied is absolutely useless for. My response had to be dissatisfied (because I was, and didn't want them to leave it as-is), even though I'd have preferred for them to come back with an improved attempt, i.e. add options onto the templates to cover more utility features, maybe combine with summon beast for simplicity, and bring back some of the durability. "Dissatisfied" was able to convey none of that, and I've no confidence my text comments were read.
How exactly does a 5 tier satisfaction system not communicate the same basic message of “it’s good as-is -> it could be better -> I wouldn’t accept it as-is”?
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes. Asking for suggestions would be less constructive, not more, because the responses would be much more disparate and harder to parse into useful analytical data.
Except the surveys did have sections to write in commentaries, and frankly the kind of qualitative answers you’re saying they should have polled for more heavily aren’t particularly useful to a large-scale survey. They need quantitative data points they can translate into trends and analyze first. They were not looking for suggestions on what changes could be made going forward nearly as much as they simply wanted to know how positively the changes they introduced were viewed.
Which is precisely why rating every question on "satisfaction" makes the surveys of dubious value; satisfaction is highly subjective and extremely reductive. I can be highly dissatisfied with a feature but see what they were going for; the text field is useless for nuance because they've tossed anything that scored low, apparently regardless of what people said (as they don't even mention that).
A much more useful scale would have been "works well -> needs work -> redesign" or something along those lines, i.e. actually give us a way to indicate where we'd like to see improvements, so that features getting a lot of "needs work" results can come back with improvements rather than being axed outright. It would be better to break them down into further questions, but if you're going to go simplistic, then three distinct options are better than basically having only two whose actual meaning you've got to guess at.
I highly doubt that much attention was paid to the text comments, and that's understandable given the volume of responses and the amount of work that would be required. I expect it's more likely that, for any results they wanted to dig into further, they just took a random sampling of text responses to see what people were saying alongside their very/somewhat satisfied/dissatisfied ratings, or else fed them into some kind of big-data algorithm to spit out common phrases, tone, etc. But it's very likely text responses were simply ignored for anything with a very high or very low satisfaction rating, as they seemed very focused on the scores.
Point being, there are better ways to ask the questions and get more useful responses; satisfied vs. dissatisfied only tells you people either loved or hated a feature as it was presented, but not why, or whether they want it axed or improved. It's a very poor metric to base actual decisions on.
Again, I was very much in favour of the idea of converting wild shape to templates, but I hated how weak it was and how much utility was lost, which is a position that satisfied vs. dissatisfied is absolutely useless for. My response had to be dissatisfied (because I was, and didn't want them to leave it as-is), even though I'd have preferred for them to come back with an improved attempt, i.e. add options onto the templates to cover more utility features, maybe combine with summon beast for simplicity, and bring back some of the durability. "Dissatisfied" was able to convey none of that, and I've no confidence my text comments were read.
Well, they have said they read every comment. It’s certainly your prerogative not to believe them, but they said it.
I think some people (and I don't mean you in particular) also had different expectations for what a playtest is, or at least what this playtest is. They weren't looking to solicit advice from a bunch of armchair game designers. They weren't looking to crowdsource the Four Elements monk. They just wanted to know if we thought an idea was fun. Not how we out here in the peanut gallery might fix it, not "it would be fun if you changed that d4 to a d6", just "is it fun?" The player base is good for identifying problems, but fixing them is outside of our job description.
How exactly does a 5 tier satisfaction system not communicate the same basic message of “it’s good as-is -> it could be better -> I wouldn’t accept it as-is”?
Because that's not at all what they gave us; very satisfied, satisfied, neutral, dissatisfied, and very dissatisfied do not convey those concepts. If I dislike the specific implementation of a feature but generally support the idea they were going for, which option do you imagine says that?
If I put dissatisfied (as I'd be inclined to), that doesn't tell them why I didn't like it, or that I want to see the idea come back with improvements; that answer is in fact indistinguishable from someone who simply didn't like it but didn't feel strongly enough to put "very dissatisfied".
The difference is that instead of users having options to give clear, unambiguous answers, all interpretation of the scores is done after the fact, and may bear no relation at all to what respondents actually intended when they answered. Nuance doesn't work when it gets boiled down to pass/fail.
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes. Asking for suggestions would be less constructive, not more, because the responses would be much more disparate and harder to parse into useful analytical data.
Am I accidentally posting in the wrong language or something? I'm getting real tired of being accused of not understanding what the surveys were for; while it'd be nice if they took suggestions on board, I am very specifically posting about the inability to give a clear "bring it back with improvements" answer in the surveys, which led to potentially good changes that were badly implemented being dropped.
And I'd appreciate no further attempts at a straw-man accusing me of being too ******* stupid to understand a survey, thank you very much.
We've seen first-hand how they immediately axed anything with a low score, which makes it very clear they can't have been reading all of the textual feedback, as there's simply no way they could have (and they even talked about not having done so yet, since the Druid response came extremely quickly). Meanwhile, we know they aren't viewing the scores as anything other than a binary pass/fail result, because they've talked about doing exactly that in multiple videos, multiple times, involving multiple features that they axed.
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes.
The point is "I like the concept but the implementation is terrible" and "I hate the concept" are both negative responses, but they should have different effects on the designer.
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes.
The point is "I like the concept but the implementation is terrible" and "I hate the concept" are both negative responses, but they should have different effects on the designer.
Which is what the middle rating is for, assuming the designers are looking for that kind of feedback. They aren’t trying to design the changes by committee on the scale of the entire community, they’re just looking for how positively or negatively the changes were received. Honestly, the biggest problem with this approach just seems to be that it can cause people to overestimate the amount of feedback that is actually desired. This is a major company and polling consumers is a major field of market research. Do you really think they structured the surveys as they did because they were just phoning them in and asked some guy with a high school understanding of data collection and analysis to get them some kind of numbers, or do you think they hired actual professionals and explained what kind of data they were looking for?
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes.
The point is "I like the concept but the implementation is terrible" and "I hate the concept" are both negative responses, but they should have different effects on the designer.
Should they, though? Designers have a finite amount of time and energy they can spend. If option A has 65% approval and option B has 75%, why spend time fiddling with A in the hopes of raising it to 80? They have however many other projects to work on, and hard deadlines. Sure, with unlimited resources they can try to polish up a diamond-in-the-rough idea. But since they don't have unlimited resources, they have to work with the most realistic options for making most of the people happy most of the time.
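Put very roughly in numbers (a hypothetical sketch; the percentages, the cut-off, and the number of "design slots" are all made up for illustration):

# Hypothetical triage: with limited design time, spend iteration effort on the
# best-received options and shelve the rest, rather than dragging a 65% up to 80%.

approval = {"option A": 0.65, "option B": 0.75, "option C": 0.82}
design_slots = 2  # however many things the team actually has time to iterate on

ranked = sorted(approval, key=approval.get, reverse=True)
iterate_on, shelve = ranked[:design_slots], ranked[design_slots:]
print(iterate_on)  # ['option C', 'option B']
print(shelve)      # ['option A']

It's crude, but that's the whole point: with finite slots, the lowest scorer gets shelved no matter how fixable it might be.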
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes.
The point is "I like the concept but the implementation is terrible" and "I hate the concept" are both negative responses, but they should have different effects on the designer.
Which is what the middle rating is for, assuming the designers are looking for that kind of feedback.
Really? To be honest, I haven't done all of the surveys, but to my recollection it doesn't go "Very Satisfied, Satisfied, I Enjoy the Underlying Concept but the Method by Which It Was Implemented Leaves Much to be Desired, Dissatisfied, Very Dissatisfied." I think I probably would have noticed that.
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes.
The point is "I like the concept but the implementation is terrible" and "I hate the concept" are both negative responses, but they should have different effects on the designer.
Which is what the middle rating is for, assuming the designers are looking for that kind of feedback.
Really? To be honest, I haven't done all of the surveys, but to my recollection it doesn't go "Very Satisfied, Satisfied, I Enjoy the Underlying Concept but the Method by Which It Was Implemented Leaves Much to be Desired, Dissatisfied, Very Dissatisfied." I think I probably would have noticed that.
It does not literally spell it out, but what is a neutral option for if not “I don’t hate the idea, but I think it could be much better”?
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes.
The point is "I like the concept but the implementation is terrible" and "I hate the concept" are both negative responses, but they should have different effects on the designer.
Which is what the middle rating is for, assuming the designers are looking for that kind of feedback. They aren’t trying to design the changes by committee on the scale of the entire community, they’re just looking for how positively or negatively the changes were received. Honestly, the biggest problem with this approach just seems to be that it can cause people to overestimate the amount of feedback that is actually desired. This is a major company and polling consumers is a major field of market research. Do you really think they structured the surveys as they did because they were just phoning them in and asked some guy with a high school understanding of data collection and analysis to get them some kind of numbers, or do you think they hired actual professionals and explained what kind of data they were looking for?
While I agree on their intent, I think you are overestimating their process, or how much being a big company factors into the revision of a D&D product. All indications are that their surveys and data analysis are usually not done with the type of rigor you are talking about. I definitely do not think they hired professionals to help them; they had a couple of people reading thousands of text replies. Then, in the middle of the last feedback cycle, Hasbro fired half their employees, some of whom are confirmed to have been involved with OneD&D. I'm not saying they were horrible at gleaning info from the surveys, but I doubt professional data scientists would approve of their process.
But none of that matters; the fact is, as you said, the primary intent was just to get a rough idea of how satisfied people were with specific changes. They relied on comments for the whys, and only considered them if they wanted to. They decided that whether to keep iterating on something or drop it was going to be based on their own considerations, not on what the community thought. You can see that in the things that were dropped or kept.
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes.
The point is "I like the concept but the implementation is terrible" and "I hate the concept" are both negative responses, but they should have different effects on the designer.
Should they, though? Designers have a finite amount of time and energy they can spend. If option A has 65% approval and option B has 75%, why spend time fiddling with A in the hopes of raising it to 80? They have however many other projects to work on, and hard deadlines. Sure, with unlimited resources they can try to polish up a diamond-in-the-rough idea. But since they don't have unlimited resources, they have to work with the most realistic options for making most of the people happy most of the time.
If you're trying to solve a problem, the difference between "right track but needs work" and "wrong track" is important.
Well, they have said they read every comment. It’s certainly your prerogative not to believe them, but they said it.
I think some people (and I don't mean you in particular) also had different expectations for what a playtest is, or at least what this playtest is. They weren't looking to solicit advice from a bunch of armchair game designers. They weren't looking to crowdsource the Four Elements monk. They just wanted to know if we thought an idea was fun. Not how we out here in the peanut gallery might fix it, not "it would be fun if you changed that d4 to a d6", just "is it fun?" The player base is good for identifying problems, but fixing them is outside of our job description.
Oh, absolutely; you only need to look at the HB forums to see how poorly designed many community ideas are. Re: reading the comments (*note: I have no internal knowledge of WotC or even second-hand information, this is just guesswork*), considering how we manage the results of surveys with ~60 responses, I'd say what most likely happens is that some interns read all the comments and summarize them into no more than 5 common themes for each open-response box, which then go to their supervisor, who summarizes across all the open-response boxes into 1-3 general themes for each subclass, which are then shared with the design team (something like the rough sketch at the end of this post). If you watch the interviews with Crawford, that feels like the type of information he is getting, i.e. "warlocks need more spell slots", or "I want my animal form to scale so I stay the same animal as a Moon druid", or "Monk needs more attacks at level 11".
If you wrote more than one sentence for each point you wanted to get across, then you mostly just wasted your own time.
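For what it's worth, the kind of pass I'm imagining is nothing fancier than this (again, pure guesswork on my part; the theme keywords and comments are invented examples, not anything from WotC):

from collections import Counter

# Guesswork sketch: boil a pile of free-text survey comments down to a few recurring
# themes by counting keyword hits, then report only the most common ones upward.

THEMES = {
    "warlocks need more spell slots": ["spell slot", "spell slots"],
    "wild shape should scale": ["wild shape", "animal form", "template"],
    "monk needs more attacks": ["monk", "extra attack", "level 11"],
}

def summarize(comments, top_n=3):
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return counts.most_common(top_n)

comments = [
    "Warlocks really need more spell slots to feel playable.",
    "I want my animal form to scale so I can stay the same beast as a Moon druid.",
    "Monk still needs more attacks at level 11.",
]
print(summarize(comments))

Anything outside the handful of predefined themes just never makes it up the chain, which is exactly why a carefully argued paragraph probably goes nowhere.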
And, again, you’re missing the point: they were not polling the community for suggestions on how to fix things, they were polling for whether or not people liked their changes.
The point is "I like the concept but the implementation is terrible" and "I hate the concept" are both negative responses, but they should have different effects on the designer.
Which could very easily be the difference between "Dissatisfied" and "Very Dissatisfied".