More like "there doesn't seem to be much of an improvement in reasoning for this."
Though yes, that too :) Don't be surprised when you see 0/0 votes on movies.
I voted with the herd (last choice), but I'd like to point out a problem:
With ratings, at least for me, it's hard to give a useful answer on the Technical aspect. While 5 may be average, I don't exactly know how to judge how optimized a movie is. Some mistakes can be obvious, but it's hard to see the little, tiny mistakes costing only a few frames, which more experienced people would be able to spot immediately. I may end up giving an unoptimized movie a 9, because while the end result looks very good, there may be many, many little mistakes that cost time. How would that get factored in?
adelikat wrote:
It started off fairly tame, but as more balls entered the picture it sure got a lot more entertaining.
I voted with the herd (last choice), but I'd like to point out a problem:
With ratings, at least for me, it's hard to give a useful answer on the Technical aspect (1). While 5 may be average (2), I don't exactly know how to judge how optimized a movie is (3). Some mistakes can be obvious, but it's hard to see the little, tiny mistakes costing only a few frames, which more experienced people would be able to spot immediately (4). I may end up giving an unoptimized movie a 9, because while the end result looks very good, there may be many, many little mistakes that cost time. How would that get factored in? (5)
(1)It is for everyone. Hopefully the suggestion at the rating thread of giving a clearer definition of the intended purpose of the technical rating will make it a little easier.
(2)I don't know where people get the ridiculous idea that 5 should be the average. Well, actually I do, it's because it says "average" right next to the rating. I think it's very misleading, and a 5 actually is a pretty low score. Some people will vote higher on movies than others, so the averages of people will be different, but that's perfectly fine. 5 should not be, and was never intended to be, the average. (I also suggested getting rid of these labels like "average" at the rating thread.)
(3)No one really knows. It's hard for everyone to determine just by watching the movie. Do note that the "How close is this movie to perfection" was not the intended purpose of the technical rating, and like I said at (1), hopefully this will be fixed some time soon.
(4)I don't think improvements of a few frames can be seen immediately. Experienced people might indeed be able to spot more, but they will either spot clear improvements, or might see something that needs to be tested, which could be a few frames faster. It's most of the time impossible to spot frame improvements right away, without actually testing it.
(5)I don't think anyone can really tell. Everyone can be wrong about something in the end. I personally think it's good to compare a movie with other movies you rated, and rate it in a consistent way with your previous ratings.
No, that was a simplified argument to get rid of your bullshit 'only publish things above 5' concept.
The whole idea that everything we publish is above average negates the concept of an average in the first place. If we are to assume that every run published is above a 5 already, why even give the option to vote less than 5?
This problem can be solved by a nonlinear scale. Make 3 the average, 2 "bad", 1 "unassisted player could probably do better", and 0 "unassisted player could definitely do better", or something along those lines. It also touches on the problem of decimals by giving a more precise scale in the part where it's needed.
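For illustration only, here is a minimal Python sketch of what such a nonlinear, anchor-labelled scale could look like. The anchor texts come from the suggestion above; the function name and the cut-off for "above average" are my own assumptions, not anything actually proposed for the site:

```python
# Hypothetical sketch of a nonlinear rating scale: coarse, labelled anchors at the
# low end, with the finer steps reserved for the upper range where most votes land.
ANCHORS = {
    0: "unassisted player could definitely do better",
    1: "unassisted player could probably do better",
    2: "bad",
    3: "average",
}

def describe(rating: float) -> str:
    """Return the label of the nearest low-end anchor, or note that the rating
    falls in the finer-grained 'above average' part of the scale."""
    nearest = round(rating)
    if nearest > 3:
        return "above average (finer steps apply here)"
    return ANCHORS[max(nearest, 0)]

print(describe(2.0))   # "bad"
print(describe(7.4))   # "above average (finer steps apply here)"
```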
Warp wrote:
Edit: I think I understand now: It's my avatar, isn't it? It makes me look angry.
Moozooh: We don't need any labels for ratings. People can decide themselves what they think a rating means. The decimal system would solve the problem since the labels will be gone.
I said "we don't need them", as in "no one needs them". So I meant everyone, including the people who might want them. Either way, why do you think they are so useful (especially considering people get misguided by labels like "average")?
I'll tell you a story which may or may not be a helpful answer to this point (and to some of the earlier ones about rating precision and a single criterion).
Some time ago, when I was a preteen boy, I was walking with my father's friend and jumping around. He asked me what height I would be afraid to jump from. I thought a little and said 2 meters. "So 199 cm would be fine then?" "No… I guess." "198?.." And so on. The interesting thing about that question was that precision was absolutely unneeded in such a case, because we don't evaluate subjective things that precisely. Seriously, we don't even care about that kind of precision, and we feel differently about it depending on the circumstances. Because of that, no normal person would be able to tell jumping from 198 cm and 199 cm apart. They could, however, speculate and justify.
Yes, speculation and justification. The subjective, speculative criteria of appraising movies ("how optimal it is", "how rewatchable it is", etc.) will never be precise enough to warrant small differences (there's no "I will rewatch this movie voluntarily twice, and this one five times"), and there will be many movies you will like differently but won't be able to tell which of them you like more. And some of them, I'm sure, will be fundamentally incomparable to each other.
Naturally, you could go with infinite precision, like "I rate this run 10.0, and this one 9.999999999999". Do you sense something wrong here? I do, because you essentially liked both the same (or so insignificantly differently that you'd have to forcibly justify the difference to yourself using irrelevant criteria), but you want one of them to end up below the other on your list. Which is the main thing I dislike about your system: the added precision is geared towards a certain end result.
This is somewhat similar to the Last.fm phenomenon, where a lot of people (a significant portion, at that) would change their listening habits while the system is gathering statistics, to represent their tastes better by giving certain artists/tracks arbitrary positions on the charts. That is an end-result-oriented approach as well.
The reason I always prefer a ~10-20 step scale is that I can definitely tell the steps apart. The labels serve the same purpose: to tell the steps apart and to set reference points for those who are unsure how to formulate their own. It's the same as jumping from heights of 20, 40, 60, etc. cm, where you can physically feel the difference between each step and eventually, empirically, come to the one you will be afraid to jump from. And if I'm afraid of jumping from 200 cm but not 180, naming an arbitrary number like 187 here won't make anything better, clearer, or more precise. It will tell just about nothing, because it stays in the grey area.
Hope that was clear.
Warp wrote:
Edit: I think I understand now: It's my avatar, isn't it? It makes me look angry.
It's indeed about significance... and I do think the difference between a 7.8 and a 7.9 is significant. I don't see your problem, as you will still be able to vote in as big steps as you want. I don't think the difference between 7.77 and 7.78 is significant, which is why I'm not suggesting that. The reason I think 0.1 is a significant difference is that most movies tend to end up with a rating between a 6 and a 9, while there are still huge differences between them. For published movies, ratings below a 4 are really rare (I think... for most of the voters), so it's not really like you'll have 100 options.
Maybe it also has to do with the grading system used in school, for instance. For me, it was always from 1 to 10, with the accuracy I'm advocating. I've gotten used to grades in that form, and I get a different feeling from seeing a 7.7 than from seeing a 7.8. If it's really not significant, then why not round the averages of the votes to integers as well? If you list the TASes by average rating, you see something far more informative than you would if all the averages were rounded to integers. I view a personal rating list in a similar way.
I think it's time to close this poll. It's clear that mmbossman's suggestion came out as the favourite.
The rest of the debate concerns how things should be rated, and that is also being discussed in parallel at http://tasvideos.org/forum/t/3859.
For the time being, before a consensus appears at that thread, I think we'll implement the rating mechanism the same way it's currently in the movie system (possibly without the user priority system though); it can be changed later.
I'll implement it as soon as possible, which is hopefully before the next Sabbath :)
I would like to see a box with no whiskers. I think the first and third quartiles show the useful deviation information that can illuminate controversy and spark discussion, but the minimum and maximum would just start arguments.
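To make the "box with no whiskers" idea concrete, here is a small, hypothetical Python sketch (not the site's actual code) of the three numbers such a box would need -- first quartile, median, third quartile -- with the minimum and maximum deliberately left out:

```python
from statistics import quantiles

def box_without_whiskers(ratings):
    """Return (Q1, median, Q3) for a list of ratings.

    Only the quartile box is reported -- no min/max "whiskers" --
    so a single extreme vote doesn't become ammunition for arguments."""
    q1, med, q3 = quantiles(ratings, n=4)  # the three quartile cut points
    return q1, med, q3

# Hypothetical entertainment votes on a submission:
print(box_without_whiskers([4.5, 6.5, 7.0, 7.5, 8.0, 8.0, 9.0]))
```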
I think it's time to close this poll. It's clear that mmbossman's suggestion came out as the favourite.
The rest of the debate concerns how things should be rated, and that is also being discussed in parallel at http://tasvideos.org/forum/t/3859.
For the time being, before a consensus appears at that thread, I think we'll implement the rating mechanism the same way it's currently in the movie system (possibly without the user priority system though); it can be changed later.
I'll implement it as soon as possible, which is hopefully before the next Sabbath :)
This has now been implemented.
Feel free to point out bugs and make suggestions.
It is not exactly as mmbossman suggested -- I took the liberty of reusing some of my old code to make it look cooler and provide slightly more information...
Also I fear that showing the minimum and the maximum ratings might cause some controversy, which was the whole reason why the old poll was removed in the first place.
It might be a good idea to just show more "neutral" statistics, i.e. the average and the standard deviation (for those who understand what it means). The first value gives a picture of the overall opinion, and the second one shows how much people disagree on it.
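As a rough illustration of that suggestion (plain Python, not the actual site code; the vote numbers are made up), the two statistics could be computed like this:

```python
from statistics import mean, stdev

def rating_summary(ratings):
    """Average = overall opinion; standard deviation = how much voters disagree."""
    avg = mean(ratings)
    spread = stdev(ratings) if len(ratings) > 1 else 0.0
    return round(avg, 1), round(spread, 1)

# Two made-up vote sets with the same average (7.25) but very different agreement:
print(rating_summary([7.0, 7.5, 7.0, 7.5]))    # (7.2, 0.3) -- voters mostly agree
print(rating_summary([4.0, 10.0, 5.0, 10.0]))  # (7.2, 3.2) -- voters strongly disagree
```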
So uh, I haven't followed this thread and have a question: are the ratings transferred to the publication if the submission is published?
I took the liberty of choosing that they are indeed transferred to the publication if the submission is published.
The ratings can be changed after that, of course, but it is done through the old means of rating, not through forums.
You'll need to make it smaller so it doesn't stretch the entire forum. Perhaps having drop-down boxes like we do when rating published movies would be a better idea.
I didn't have any problems, but I can see how smaller screens would have the format all chopped up.
One request: Would it be possible to view an enlargement of the entertainment/tech/average chart by clicking on it? It's rather small, and my old eyes aren't what they used to be ;)
Overall though, I like it a lot. I'm sure there will be speedbumps, but I think this is a much needed refresh, and a good improvement to the system.
Question: Will these submission rating statistics be preserved when the submission is accepted/rejected? In other words, will they be visible in the "Published movies" and "Gruefood" groups? It could be interesting to see the final stats after the publication/rejection.
Question: Will these submission rating statistics be preserved when the submission is accepted/rejected? In other words, will they be visible in the "Published movies" and "Gruefood" groups? It could be interesting to see the final stats after the publication/rejection.
They will be, but one cannot change the ratings anymore.
There seems to be an error in the HTML that Opera particularly doesn't like. I'll try fixing it.
It's hard to find it though because the phpBB template HTML is broken in many other places, so finding that particular error...
The board looks better without borders (i.e. when it's broken) :P
The actual rating graphic itself could be better. It's too small and compressed. Line spacing is good.
Another question: Could you explain a bit the statistics graphic? I'm not sure it's immediately obvious how it should be read. Maybe write a little page about it?
Edit: Damn, Alden beat me to it. :)