Here it is. I've picked a stage full of flicker and colors.
Aktan wrote:
This is another reason why I asked, "Are you sure YT quality is great?" YouTube has been known to cut the vertical resolution in half and then upscale it again, effectively losing 50% of the vertical resolution.
I don't think so, my friend.
I did a test. I made this 1920x1080 picture (look closely) and shoved it into a short uncompressed AVI file.
The results are here. If you have a 1080p monitor/TV, set this video to 1080p and look closely. I can distinguish all the black lines from the pink ones. If you were right, I would've seen a single color. Am I right?
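For the record, the whole test can be scripted in AviSynth along these lines (the PNG filename and clip length here are just placeholders, not what I actually used):

# Rough sketch of the test: serve the 1920x1080 line pattern as a short clip,
# then save it losslessly (e.g. from VirtualDub) as an uncompressed AVI for upload.
ImageSource("lines_1920x1080.png", start=0, end=299, fps=30) # ~10 s of the still
ConvertToRGB32() # keep full RGB so nothing is lost before YouTube touches it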
Joined: 6/25/2007
Posts: 732
Location: Vancouver, British Columbia, Canada
Mister Epic wrote:
Aktan wrote:
This is another reason why I asked, "Are you sure YT quality is great?" YouTube has been known to cut the vertical resolution in half and then upscale it again, effectively losing 50% of the vertical resolution.
I don't think so, my friend.
I did a test. I made this 1920x1080 picture (look closely) and shoved it into a short uncompressed AVI file.
The results are here. If you have a 1080p monitor/TV, set this video to 1080p and look closely. I can distinguish all the black lines from the pink ones. If you were right, I would've seen a single color. Am I right?
I see your lines, but the color is completely wrong after the YouTube transcode.
Well... COMPLETELY is too much. It's still pink, right?
But it still proves my point. YouTube doesn't cut the vertical resolution in half.
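If YouTube really did halve the vertical resolution, the effect would be easy to fake in AviSynth (the filename is a placeholder, and the bilinear scaler is a guess at what they'd use):

# Simulate the claimed halving: throw away every other line, then scale back up.
src  = ImageSource("lines_1920x1080.png", end=0)  # the single test frame
half = BilinearResize(src, 1920, 540)             # vertical resolution cut in half
BilinearResize(half, 1920, 1080)                  # the black/pink rows merge into one flat color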
Joined: 6/25/2007
Posts: 732
Location: Vancouver, British Columbia, Canada
The YouTube-ruined version is significantly less vibrant than the original. I realize that's color space conversion, but it looks like it went through multiple conversions, causing the color to lose the majority of its luster. Isn't there a brighter magenta in that other color space that's closer to the original? I have a feeling there is and YouTube's color space conversion algorithm isn't getting as close to its mark as it could.
Of course, your original point has been proven. I still wonder what happened with my video, though. It seemed to have its vertical resolution cut in half. Perhaps YouTube has fixed that aspect of their transcoding since my video was transcoded.
Of course, your original point has been proven. I still wonder what happened with my video, though. It seemed to have its vertical resolution cut in half.
Your video was uploaded on March 2nd, 2011. Maybe YouTube has improved since then.
I don't think so, my friend.
I did a test. I made this 1920x1080 picture (look closely) and shoved it into a short uncompressed AVI file.
The results are here. If you have a 1080p monitor/TV, set this video to 1080p and look closely. I can distinguish all the black lines from the pink ones. If you were right, I would've seen a single color. Am I right?
Nope, since it kind of depends on the source. That is, it won't always happen. Your static screen is too easy to compress. Did you not see Lex's example? That shows it completely.
This is what I get on my screen from YouTube:
http://img193.imageshack.us/img193/2966/ytoutput.png
Edit: I should mention, Mister Epic, that you should post a PNG of what it looks like AFTER the colorspace conversion to YV12. An RGB picture is hard to compare.
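Something like this would produce that comparison frame (a sketch; I'm assuming the Rec.601 matrix that ConvertToYV12 defaults to, and the filename is a placeholder):

src = ImageSource("lines_1920x1080.png", end=0)
yv  = ConvertToYV12(src)  # 4:2:0: each 2x2 pixel block shares a single chroma sample
ConvertToRGB32(yv)        # compare this frame against the RGB original

With 1-pixel rows, that 2x2 chroma averaging alone will wash out the magenta before YouTube's encoder even touches it.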
A quick question: all of this seems to be based on the fact that YouTube... sucks. Seriously, seriously sucks.
It "sucks" because it only supports 30 FPS? I don't see the connection.
Take into account that YouTube's player is the lightest Flash-based video player I have encountered (playing a video consumes something like 30-50% CPU on my system with YouTube, while the video player of basically any other video sharing site consumes at least 80% CPU), but it's still quite heavy. I'm pretty certain that if it tried to play a video at 60 FPS, the system requirements would almost double, making it unplayable on most older systems. Not everybody has a 64-bit quad-core. (Another issue is that the bandwidth requirement would also increase. Not everybody has a gigabit internet connection.)
YouTube's main purpose is to share home videos. Most cameras can't even record over 30 FPS. What possible advantage would there be in YouTube supporting framerates higher than that?
From the perspective of the original consoles, also take into account that TV sets were likewise 30 FPS (in NTSC systems) or 25 FPS (in PAL systems). The game switching the visibility of a sprite on each frame (in other words, the visibility flag changing 60 times per second on NTSC) wouldn't have caused the sprite to flicker on screen. It would have caused the sprite to be seen only on every other scanline (because TVs use interlacing). There would not have been a visible flicker.
From the perspective of the original consoles, also take into account that TV sets were likewise 30 FPS (in NTSC systems) or 25 FPS (in PAL systems). The game switching the visibility of a sprite on each frame (in other words, the visibility flag changing 60 times per second on NTSC) wouldn't have caused the sprite to flicker on screen. It would have caused the sprite to be seen only on every other scanline (because TVs use interlacing). There would not have been a visible flicker.
I'm not sure if the flickering would be completely invisible, but it's not true that console games are interlaced (most of the time).
Normally a TV draws the even lines, then shifts the vertical position slightly and draws the odd lines. The video signal controls when this happens. If a console (such as the SNES in its default "progressive" mode) resets the position after every half-frame, the odd lines are drawn over the even lines and you get a 60fps picture with halved resolution and black scanlines.
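The interlaced case is easy to fake in AviSynth if you want to see it (the filename is a placeholder for a 60 fps progressive capture):

src = AviSource("game_60fps.avi")
src = AssumeTFF(src).AssumeFieldBased()  # treat each 60 fps frame as a single field
Weave(src)  # 30 fps interlaced frames: odd and even lines come from consecutive source frames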
Actually, both the NES and SNES do a trick to draw the same lines at 60 Hz. So in a sense it's really 240p, and you really do get a progressive picture. This is where scanlines come from.
Some information on it:
http://scanlines.hazard-city.de/
Edit: And I just noticed creaothceann explained it above.
Take into account that YouTube's player is the lightest Flash-based video player I have encountered (playing a video consumes something like 30-50% CPU on my system with YouTube, while the video player of basically any other video sharing site consumes at least 80% CPU), but it's still quite heavy. I'm pretty certain that if it tried to play a video at 60 FPS, the system requirements would almost double, making it unplayable on most older systems. Not everybody has a 64-bit quad-core. (Another issue is that the bandwidth requirement would also increase. Not everybody has a gigabit internet connection.)
The player being light has nothing to do with YouTube. All Flash players are based on Adobe's own player; you did not download anything special from YouTube to play back their videos. All YouTube made was the GUI around the player. Also, you should try some of the streaming videos from Archive and see how well you can play 60 FPS. Another thing about the bandwidth increase: since we can control the settings in our Archive streams, I know for a fact that most of the time the bandwidth needed for our own encodes is smaller than what YouTube uses. This is mostly because we spend the time to optimize the size, while YouTube has to do a quick pass.
OK, so I'd like to get to the point I'm after.
Are AviSynth encoders going to use this function in their future YouTube encodes? Of course, it will depend on the game and its flicker. I won't use it on Metal Slug 4, for example.
function TASBlend(clip c) {
# Thanks to creaothceann for this function!
# Groups of 4 input frames become 2 output frames, so a 60 fps clip turns into 30 fps.
# Layer's "level" is the overlay's opacity (257 = fully opaque): frames 0 and 1 are
# mixed 1/3 : 2/3, frames 2 and 3 are mixed 2/3 : 1/3, keeping flicker partly visible.
Interleave(Layer(SelectEvery(c, 4, 0), SelectEvery(c, 4, 1), level=int(round((2.0 / 3) * 257))),
\ Layer(SelectEvery(c, 4, 2), SelectEvery(c, 4, 3), level=int(round((1.0 / 3) * 257))))
}
(In case you missed it, creaothceann, I had to correct some things in it.)
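If anyone wants to try it, usage would be something along these lines (the source filename is a placeholder):

AviSource("movie_60fps.avi")  # 60 fps dump from the emulator
ConvertToRGB32()              # Layer() only works on RGB32 (or YUY2) clips
TASBlend()                    # 30 fps out: each frame is a weighted blend of two source frames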
OK, so I'd like to get to the point I'm after.
Are AviSynth encoders going to use this function in their future YouTube encodes? Of course, it will depend on the game and its flicker. I won't use it on Metal Slug 4, for example.
function TASBlend(clip c) {
#Thanks to creaothceann for this function!
Interleave(Layer(SelectEvery(c, 4, 0), SelectEvery(c, 4, 1), level=int(round((2.0 / 3) * 257))),
\ Layer(SelectEvery(c, 4, 2), SelectEvery(c, 4, 3), level=int(round((1.0 / 3) * 257))))
}
(In case you missed it, creaothceann, I had to correct some things in it.)
I will probably test/use it in my future encodes, yes. Though it looks really bad in Sonic 2.
I'm guessing it's possible to apply this to only some parts of the encode?
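Something like this splice might work (a sketch; the frame numbers and filename are made up):

src    = AviSource("movie_60fps.avi").ConvertToRGB32()
before = src.Trim(0, 999).SelectEvery(2, 0)   # plain decimation to 30 fps
flick  = src.Trim(1000, 1999).TASBlend()      # blended flicker section, also 30 fps
after  = src.Trim(2000, 0).SelectEvery(2, 0)  # Trim(n, 0) runs to the last frame
before ++ flick ++ after                      # all pieces match in fps and colorspace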
Joined: 11/22/2004
Posts: 1468
Location: Rotterdam, The Netherlands
Aktan wrote:
The player being light has nothing to do with YouTube. All Flash players are based on Adobe's own player; you did not download anything special from YouTube to play back their videos.
It might be that some streaming sites use an actual streaming server to deliver content while YouTube uses actual files. In any case, YouTube loads content very conservatively (it only preloads a small amount of content quickly, and everything else after that is given minimal bandwidth until the user reaches the point where his buffer is starting to run empty). It's partly the loading of the stream that causes CPU usage (you might have noticed CPU usage drastically going down after loading completes), so this might very well have been due to YouTube's optimizations.
Makes it easier to insert lots of changes (less typing).
Mister Epic:
Yeah I noticed it; you could also put the "c" on its own line before the Interleave call, and the "int()"s are no longer necessary.
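Put together, the revised function would look something like this (a sketch, assuming a recent AviSynth where round() already returns an int):

function TASBlend(clip c) {
# "c" on its own line sets the implicit "last" variable, so later calls can omit it.
c
Interleave(Layer(SelectEvery(4, 0), SelectEvery(4, 1), level=round((2.0 / 3) * 257)),
\ Layer(SelectEvery(4, 2), SelectEvery(4, 3), level=round((1.0 / 3) * 257)))
}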