Posts for SatoshiLyish

Experienced Forum User
Joined: 3/14/2005
Posts: 43
YES vote, hands down! This run was thoroughly entertaining to watch. I haven't been this entertained since the 120-star and 16-star runs of this game. It's hard to pick a favorite moment... all the coin-collecting runs were really cool, and the Bob-omb tricks were hilarious. XD
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Voted No. The run doesn't even complete the game on the hardest difficulty!
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Noob Irdoh wrote:
kaizoman666 wrote:
Bobo the King wrote:
http://tasvideos.org/2873S.html
Hm, this needs 34 more.
To be fair, that submission had been on the workbench for 57 days, 23 hours, 38 minutes, and 13 seconds, so many people had the chance to vote on it over the weeks, even ones who log in only once per month or something.
Added my vote to the pot (YES, btw). These game-breaking glitches have limited entertainment value for me, but they're still fun to watch.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Voted yes. I liked a lot of the new strategies used in rooms, things that normally aren't possible in a regular run, and the method of scaling out of Draygon's room without the X-Ray Scanner (tuck-and-stretch swimming!). I also liked the stylistic approach in some areas, like how Mother Brain was finished off.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Updated a bunch more charts. I'm too lazy at the moment to add details, but basically a lower ref + TESA is more efficient and smaller than a high ref + UMH. The last picture, with subme 10, is the most efficient I've gotten the encoder so far.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
moozooh wrote:
SatoshiLyish wrote:
Subme 5 is the best performer here, saving ~30 seconds over the higher subme while still (somehow) being smaller than the rest.
What the hell?..
lol, it could entirely just be this particular clip that got a smaller size... your reaction is funny nonetheless. Anyway, I'm strictly dealing with lossless here... at any q>0, I'm certain that value will change with regard to quality.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
moozooh wrote:
Large keyint values should be able to provide a lot more benefit on less complex scenes. Also, you should keep in mind that this setting determines the maximum amount of frames encoded without a keyframe, not minimum. Besides, modern decoders are able to use non-keyframes as a reference as well. Thus, in practice one never needs to wait for a minute during seeking even with abnormally high values when using an up-to-date decoder. If it isn't up-to-date, or is hardware-based (iPod), or not easily updateable (consoles), problems may occur.
Well, by default, the min is set to keyint/10, so it scales automatically. I threw in those theoretical scenarios because, with ffdshow, it took a few extra seconds for me to seek on the higher-keyint clips than on the low-keyint ones... and I remember having garbled video a few years back (or maybe months? I don't really remember)... could've just been a different codec, though... not really sure. But thanks for the info.
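For what it's worth, the worst-case wait can be ballparked as keyint divided by the frame rate, since that's the longest possible distance back to the previous keyframe. A quick sketch (the keyint values here are just illustrative):

```python
# Back-of-the-envelope estimate of the worst-case seek penalty for a given
# --keyint value: if the decoder has to start from the previous keyframe,
# the longest stretch it may need to decode (or show garbled) is
# keyint / fps seconds.

def worst_case_seek_seconds(keyint, fps):
    """Longest possible distance back to a keyframe, in seconds."""
    return keyint / fps

for keyint in (300, 600, 900, 3600):
    secs = worst_case_seek_seconds(keyint, 60)
    print(f"keyint={keyint:4d} @60fps -> up to {secs:.1f}s")  # 5.0, 10.0, 15.0, 60.0
```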
Flygon wrote:
I'd personally suggest never putting the keyint above 600. I use VLC as my testbed for video encoding... and I even get some minor issues at that amount, depending on the game. Anything higher and I'm very certain that the risk of seeking issues in that player heightens significantly. I know VLC is terrible, but unfortunately, other people use it for some obscene reason, so I end up using it myself.
Years ago, when I was comparing VLC to MPlayer, the video quality was a lot better in VLC (with DVD material) than in MPlayer... of course, those issues might have been fixed in MPlayer by now. Anyway, that's my reason for using VLC (though the interface/drop-down menus annoy the hell out of me).
Experienced Forum User
Joined: 3/14/2005
Posts: 43
fsvgm777 wrote:
Me playing Kirby's Adventure (NES) until the end of 7-1: Link Note: You need the Cf (Canadian French) version of the ROM, because I'm not sure if it will sync with the U rom.
lol, you were playing this in IRC the other day, weren't you?
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Just updated with keyint values. Maybe I'll work on subme next.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Nice work! Thanks to your help I can skip testing UMH/ESA/TESA and do something else. I think I'll work on either subme or keyint ranges... were you planning on doing anything in particular next, moozooh? Also, I finished my merange ranges... I'll plug my data into a spreadsheet and update soon. And Aktan, do you have any suggestions for test clips?
Experienced Forum User
Joined: 3/14/2005
Posts: 43
moozooh, something I noticed about your MKV encode: the very first frame is stamped with "seeking to frame 22295". It would make a minuscule difference, I'm sure, but I thought I'd point that out. ^^; (The last frame is stamped too.)
Experienced Forum User
Joined: 3/14/2005
Posts: 43
moozooh wrote:
Test system spec: Intel Core i5-750 (4 cores @2.67 GHz, everything on defaults); 4 GB dual-channel DDR-1333 (not that it really matters); Windows 7 64-bit; x264 rev. 1602 64-bit from http://x264.nl/ with threads=auto. Sample length: 3600 frames.
The Gritty Details, vol. 1: me=esa vs. me=tesa. (Going to update it shortly with some missing values. Also, the rightmost column should read "speed", not "time".)
Conclusion #1: the quality/size difference between esa and tesa is absolutely negligible. While the speed hit is fixed regardless of the search range (~15%), improvements in size are within at most 0.3%, and they happen only at the point where the whole thing is slow as hell anyways. Meaning that the time is better spent elsewhere.
Conclusion #2: me_range=320 is absurd, NEVER AGAIN am I going to use this setting for anything.
haha, you have a much, much faster system than mine... I'm sure you got this data a lot faster than I did. I went and did other things when I got to the 160 merange. Also, I should probably organize my data much like this, to save space. Hmm... how did you do the quality measurements? Subjective analysis? And what settings did you use for the encode?
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Flygon wrote:
Hm... perhaps a losslessly compressed H.264 file should be supplied and an encoder decompresses it on his or her own end? It's not the most efficient method, but at least it makes sense, filesize wise.
But I'm making tests with lossless! Plus, I like to think that YV12 conversion would interfere with an equal comparison. Any recommended IRC clients? I never really got into using it, and the last time was years and years ago. -edit- n/m... I was sure Trillian had it... just found the option.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Flygon wrote:
This might sound odd, but can I actually suggest compressing the raw to inside of a highly compressed 7z file, without the audio stream inside it? Assuming it is an uncompressed raw, I'm reasonably sure it'll compress to a manageable size that'll fit in Mediafire's restrictions.
It only compresses to 212MB, with the stream without the audio being 218MB. (I used FFV1 for capture... a truly uncompressed raw would be in the range of GBs.) It really is more efficient for encoders to just record off the movie file itself. I gave the frame ranges, so it's easy enough to edit down to that section for comparison.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Flygon wrote:
Thank you, thank you, thank you! I just cannot thank you enough for making these tests! I fully admit being too lazy to make them myself, and... well... thank you! Just a question, what exact section are you testing this on, a running section with the parallax scrolling? I'd check so myself... but, well, I actually lack a Shadow of the Beast encode on me, and lack of any decent net access isn't helping me. Also, uploading the raw to somewhere like Mediafire will be extremely helpful test material for other encoders to use with other scripts. I really can't thank you enough for this, seriously.
The raw is 226MB. Sorry, but it's not happening. :/ It is a running section... the second one, starting at the beginning of the long batch of trees and ending at the giant hand coming out of the ground. There are eyeballs, running enemies, and stuff scrolling in the background, so I thought it was varied enough to test with.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
System Specs: Athlon 64 3000+, 1GB DDR 3200 RAM, Windows 2000 Streamlined (No IE), x264 core:94 r1564
Source: MaTo's Shadow of the Beast run 11:56.9, frames 18695-22295 (exactly one minute in duration).
Script: x264 "D:\Anime\Shadow 18695-22295.avi" --crf 0 --keyint 300 --ref 15 --no-fast-pskip --mbtree --direct auto --subme 10 --trellis 2 --partitions all --me umh --merange ? --8x8dct --no-dct-decimate --output "E:\S\encoded.mp4"

I thought there should be more raw data available here for newbie encoders who want to interpret the data and choose for themselves which functions (and which tradeoffs) are worthwhile. I also decided to pick this segment of the run because, one, there's been discussion on how difficult it is to encode, and two, there are both fast-moving and slow-moving objects in the scene. I'm testing strictly size vs. time, hence the crf=0. Quality will come at a later time.

After merange 160 you really start hitting the point of diminishing returns. Sure, you start getting more efficient scanning (due to the range hitting the bounds of the video resolution), but the benefit is minuscule.

----5/26/2010 Update
Script: x264 "D:\Anime\Shadow 18695-22295.avi" --crf 0 --keyint ? --ref 15 --no-fast-pskip --mbtree --direct auto --subme 10 --trellis 2 --partitions all --me umh --merange 96 --8x8dct --no-dct-decimate --output "E:\shadow\encoded?.mp4"

A size benefit with little to no encoding-time hit? Sign me up! It's interesting to note that the benefit starts to trail off after about keyint 900... I suspect that's simply down to the source material being used, and that a particular scene is forcing a new keyframe to be drawn. So I'm sure you could raise this value as high as you could possibly want and see real size benefits with little encode-time hit... HOWEVER, the higher this value, the messier it might get when seeking to random spots in the movie (or, if the decoder's smart, the time to seek increases). You don't want to set this value to, say, 3600, and have the viewer try seeking to a spot and potentially watch up to 60 seconds' worth of garbled picture (or wait 5-10 seconds as the player seeks) because they missed the keyframe point, do you? Anyway, the general rule I've read on other boards is keyint = FPS*10... so it would be a max of <10 seconds (5 on average) of garbled picture if they seeked to a point before things start displaying properly again. I think you could stretch it to FPS*15, squeeze out another half percent of size, and be done with it.

I should also note that some of these, like keyint 800 and 1400, are a lot slower. I think my computer hiccuped during those encodes, so the values are off. I don't feel like doing a second test set for these, since you can see the general line of time vs. size. (Maybe I should make a scatter plot?)

----5/27/2010 Update
Script: x264 "D:\Anime\Shadow 18695-22295.avi" --crf 0 --keyint 900 --ref 15 --no-fast-pskip --mbtree --direct auto --subme ? --trellis 2 --partitions all --me umh --merange 64 --8x8dct --no-dct-decimate --output "E:\shadow\encoded subme?.mp4"

This appears to be the largest factor on size so far. As such, subme 1 & 2 are NOT recommended in any scenario. Subme 5 is the best performer here, saving ~30 seconds over the higher subme values while still (somehow) being smaller than the rest. But if you were to pick a higher value, subme 10 would be the best bet, with little time loss over 6 & 7 while still being in the same bracket as 8 & 9, and potentially increasing the quality (if it weren't lossless) over the other subme values... but that's for another article. Note: looking at the MeWiki, I realize the brackets exist because subme 7 & 9 also scan for B-frames... since those are nonexistent here, you get the same size/encoding time.

----Update: More charts!
Source: x264 "D:\Anime\Shadow 18695-22295.avi" --crf 0 --keyint 900 --me ? --subme 5 --merange 16/64 --ref 8 --trellis 2 --partitions all --8x8dct --no-dct-decimate --mbtree --no-fast-pskip --direct auto --output "E:\shadow\encoded ?.mp4"
DIA/HEX/UMH/ESA/TESA

Comparing all the different types of motion search. merange had to be lowered to 16 due to dia and hex being capped at that value (for a fair comparison). Hex is only slightly slower than dia and has better compression, so it should always be used over dia. Esa/tesa is nearly 4x slower than umh, but even so, the total amount of time spent got me curious... so in the next charts I started comparing umh vs. tesa with ref values.

Source: x264 "D:\Anime\Shadow 18695-22295.avi" --crf 0 --keyint 900 --ref ? --no-fast-pskip --mbtree --direct auto --subme 5 --trellis 2 --partitions all --me umh --merange 64 --8x8dct --no-dct-decimate --output "E:\shadow\encoded ref?.mp4"
Merange 64 UMH

Tesa at merange 16 vs. UMH at merange 64 takes roughly the same time to encode, yet tesa is smaller... interesting... so I did UMH vs. tesa at merange 16 with ref values as well.

Source: x264 "D:\Anime\Shadow 18695-22295.avi" --crf 0 --keyint 900 --me umh --subme 5 --merange 16 --ref ? --trellis 2 --partitions all --8x8dct --no-dct-decimate --mbtree --no-fast-pskip --direct auto --output "E:\shadow\encoded ref?.mp4"
Merange 16 UMH

Source: x264 "D:\Anime\Shadow 18695-22295.avi" --crf 0 --keyint 900 --me tesa --subme 5 --merange 16 --ref ? --trellis 2 --partitions all --8x8dct --no-dct-decimate --mbtree --no-fast-pskip --direct auto --output "E:\shadow\encoded ref?.mp4"
Merange 16 TESA

Even at ref 15, UMH's filesize can't go under tesa's size at ref 3... one of the many deciding points for me to switch to tesa.

Source: x264 "D:\Anime\Shadow 18695-22295.avi" --crf 0 --keyint 900 --me tesa --subme ? --ref 5 --merange 16 --trellis 2 --partitions all --8x8dct --no-dct-decimate --mbtree --no-fast-pskip --direct auto --output "E:\shadow\encoded ref55.mp4"
Retest: Subme 5 vs. 10 using TESA

Well, subme 5 being a smaller encode was mostly a fluke... since subme 10 is not much slower, I'll stick with that value. Aside from the merange 64 tesa encodes, 16464055 is the smallest file size I've reached (while still being very efficient with encoding).
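The "?" placeholder in the scripts above was filled in by hand for each run; a small helper could generate the whole sweep instead. This is just a sketch (the function name is my own invention), reusing the flag set from the subme script:

```python
# Hypothetical sweep helper: builds one full x264 command line per value of
# the swept parameter, instead of editing the "?" placeholder by hand.
# Paths and flags mirror the subme script above.

BASE = ('x264 "D:\\Anime\\Shadow 18695-22295.avi" --crf 0 --keyint 900 '
        '--ref 15 --no-fast-pskip --mbtree --direct auto --subme {subme} '
        '--trellis 2 --partitions all --me umh --merange 64 '
        '--8x8dct --no-dct-decimate --output "E:\\shadow\\encoded subme{subme}.mp4"')

def sweep(template, key, values):
    """Return one complete command line per swept value."""
    return [template.format(**{key: v}) for v in values]

for cmd in sweep(BASE, "subme", range(1, 11)):
    print(cmd)  # paste into a batch file, or pass to subprocess.run
```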
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Voted yes. Surprisingly entertaining, aside from the music being a little annoying. Some of the shots are pretty cool, like the all-balls-in-one shot on Level 20. I can imagine this game having some really insane shots. Guess I should check out the no-friction version to see how crazy it is.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Flygon wrote:
About the merange, as far as I am aware, it actually caps off at 256 for NES games... I just put it at 512 because I like powers of two and that it works well for extremely basic Mega Drive games (Where it caps off at 320). Basically, from what I recall, it's pixel based. I felt I should clear this up, just in case.
What kind of an improvement is it? A lower file size? Better visuals? I can't imagine it being very efficient, or very beneficial... it's akin to losing your keys in a parking lot. You can scour every inch of the lot from left to right until you find them, or you can search the areas you last remember being in, and most likely find them that way in a lot less time. Both result in you merely "finding your keys". Most game animations wouldn't even move that fast within one frame, unless something teleported or scrolled off one edge of the screen to the other. Those are the only instances I can think of for needing (and that's stretching it) such a high value. Anyway, I did more testing with subme... the higher values significantly chop down the file size. Subme 5 starts to get pretty close to 10, but I haven't tested fast-motion video yet, so I'm sure there's a pretty good reason for implementing it. I'm curious about the quality with high b-frame rates, so I'll start messing with that.
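To put a rough number on the parking-lot analogy: an exhaustive search over a motion range r checks on the order of (2r+1)^2 candidate positions per block. This ignores x264's internal shortcuts, so it's only a scale argument, but it shows why merange 512 is so much more expensive than merange 16:

```python
# Rough cost model for exhaustive motion search (esa/tesa): a search range
# of r pixels in each direction gives a (2r+1) x (2r+1) window of candidate
# positions per block, so cost grows quadratically with merange.

def full_search_candidates(merange):
    return (2 * merange + 1) ** 2

for r in (16, 64, 256, 512):
    print(f"merange {r:3d}: ~{full_search_candidates(r):,} candidates/block")
# merange 512 works out to nearly 1000x the candidates of merange 16.
```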
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Hey, what's going on in here? :D This sounds like fun, so here's my input. Just did a VS playthrough of Panel de Pon in SNES9x 1.43 v17 on Hard. Used one continue. http://www.mediafire.com/file/tyzz5niywta Ugh, I'm so unbelievably rusty at this game. On the first stage I was simply stupefied as to what to do; I haven't played in so long. And I call this my favorite puzzle game, too... :/ Well, watch and enjoy as I start to get back into my groove. I used to be able to beat this on Very Hard with one or two continues.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
sgrunt wrote:
To note: I originally came up with those settings for deldup; it was the first set I came up with that dealt reasonably well with the two videos for which problems had previously been observed (Bisqwit's no-friction Lunar Ball run and my CK5 run). They could possibly be fine tuned further.
Hmm... granted, I've only tested with low-motion/similarly-colored movement in NES Zelda, so I'll look into those runs for further testing. I started looking into other values, like subme and merange. I do feel 512 for merange is excessive, unless I know the character will wrap around to the other side of the screen (LoZ 2nd Quest Glitched as an example). That also depends on whether the encoder, when scanning the range of a pixel at the edge of the screen, compensates for the pixels off the edge by wrapping around to the other side. Subme... sounds a lot like the pel option for MVAnalyse in AviSynth. Is it really necessary to scan subpixels for games with bitmap material? I could see it being useful for polygonal games, but not for games on older systems like the SNES or Game Boy.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Aktan wrote:
Another proof that x264 doesn't have 4:4:4 yet is this: http://x264dev.multimedia.cx/?p=332 One of the projects is to add that support!
Sounds good to me! But for now, I'll just work with what I have until it becomes supported. Okay, to get back on track with the thread topic, I've been studying the options for the deldup function. From what I can understand, Flygon's implementation would be minfps=0.1:lthresh=0.1:mbthresh=100:mbmax=1:cthresh=0.1.

minfps is simple enough: 0.1fps would be one frame per 10 seconds. This is most likely the culprit of the strange tracking Flygon experienced in VLC, but I don't think that in itself should be that big of an issue... if it is, 1 should work perfectly fine.

Lthresh would be the threshold of the luma (Y) channel. If I'm understanding correctly, the sum of every pixel's absolute difference (SAD) is compared against this value. It's defined by X*Y*Lthresh, so a value of 0.1 would mean 10% of the total number of pixels. I ran some tests on this, from 100 jumping all the way down to 0.00001, and to be honest I don't think this value means a whole lot; MBthresh has a greater effect on which frames get dropped.

MBthresh and MBmax: if the number of 8x8 blocks in the image with a SAD larger than MBthresh (100) exceeds MBmax (1), then the next frame is a unique image and is kept. I'd like to point out that the documentation specifically says it has to "exceed" MBmax (technically it says MBThresh, but I'm sure that's a typo), so I think an MBmax value of 0 would be better, since the current values dictate that you'd need two blocks to be different. I'll note here that only when I set LThresh to 0.00001, with MBThresh at 100, did the encoder keep an excessive number of frames (0.00002 still seemed to work properly). I think the total SAD being under 1 is why it wasn't working.

CThresh is the threshold of the chroma (U & V) channels. It works mostly the same way as LThresh, except when I tried disabling LThresh to test CThresh alone, it wouldn't encode. I considered that you might not really need this value and could assign it a negative number to disable it and increase encoding speed, but since SAD calculation appears to be very fast and widely implemented, the benefit would be minimal. There's also the chance of two colors having the same luma value, so it wouldn't be the wisest thing.

Anyway, I think adequate values for C- & LThresh would be 1, since that would equal the number of available pixels... and because MBThresh is so low, the number doesn't have to be excessively large to begin with... assuming I understand how SAD works (is it just, for CThresh as an example, one pixel's chroma minus the other pixel's chroma value, stored as an absolute?). As for MBThresh, it all depends on how sensitive you want the filter to be in keeping non-duplicate frames. Since I work with FFV1 (lossless RGB32 compression), assigning it to 1 would make it the most sensitive to any changes in the video, but I could very well see that keeping all the frames in an already-compressed video. More tests would need to be run on this, but I think the current value of 100 works fine.
Post subject: Re: VBjin svn61 released
Experienced Forum User
Joined: 3/14/2005
Posts: 43
adelikat wrote:
New release of VBjin. Changelog:
- ROM loading from commandline (which means movie loading is now possible as well)
- added Lua functions - memory.readbyte, memory.writebyte, memory.readword, memory.writeword
- MusicSelect?.lua - a script that allows selecting and playing music tracks in a game
- Recent menu items enable sound on load (not just the open menu)
- View->Mix Left & Right View options, as well as other display options
- Wave file logging
- RAM Search - fix update previous values
- RAM Search - redraw the list when search size/format is changed
- Runs faster! (capable of full fps)
Most of these fixes come from ugetab, so much thanks to him.
Is this for Windows XP only? I can't seem to get it to run on my system, and I get an invalid Win32 application error. (Windows 2000)
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Aktan wrote:
Using AviSynth 2.60 Alpha I do the following lines:
ConvertToYV24(chromaresample="point")
ConvertToYV12(chromaresample="lanczos4")
Thanks, I'll check it out after work today.
Aktan wrote:
Ah, but it always says that even when encoding with YV12 source. I really do think the increase in file size is due to garbage data. Garbage data can be hard to encode =p.
Ugh, I must have selective vision or whatever... I see it popping up with the normal material now. So much for that. :/ If only there were a proper decoder so I could verify for sure... *just realizes FFV1 had 444P output*... *does a quick encode to rule out ffdshow's faulty decoding*... bah, x264 downsamples to YV12. At least I know for sure ffdshow was reconverting back to YV12 after all that mess. -_-;
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Aktan wrote:
Unfortunately that thread (which was started by me, right?) is a bit outdated now. After finding out the different ways I could convert to YV12 from RGB32, some of which retain more or less color, I don't think encoding to lossless can be as small as it used to be. Apparently the way I converted to YV12 before was a simple method, making it easier to encode at the cost of the colors not being as accurate. The method I now use retains (all subjective, of course) more color, but it makes the file size larger, and I've noticed that most of the time it'd be smaller to just go lossy.
I would be very interested in your conversion method. Even if there were YV24 decoders, I would have to go all the way down to q40-45 in order to get the same file size as the 4:2:0 material at q20... and in general it ended up looking worse (compression artifacts). Maybe 2x resolution is the only real way to go about it... and if I have to use 4:2:0, I'd rather get the colors accurate via a better dithering method.
Aktan wrote:
I don't think this works. Even if you convert to YV24 in avisynth, x264 auto converts it to YV12 (at least in the newer versions). I think you are getting the errors you see because x264 assumes it's YV12 and encodes as such. I'm not sure though. I'll do some of my own testing.
Well, when I started encoding I got a "4:4:4 Predictive" popup, so I'm assuming it was reading and encoding the material as YV24. Also, where is all the extra space in the file sizes coming from? I somehow doubt it's all junk data.
Experienced Forum User
Joined: 3/14/2005
Posts: 43
Aktan wrote:
I was the one who found the bug that lossless mode still had b-frames back in 2008, so I know the fix was to just ignore it. Plus the encoding generally is faster doing lossless than not (because lack of b-frames) which is why Flygon would even do these insane settings. These settings with b-frames would be a lot slower.
Very interesting. Also, I think I'll have to take back what I said about lossless being pointless, since it wasn't true lossless. I've been reading up on the "Encode to Lossless x264" thread, and I didn't realize that it was possible to get lower file sizes than some q>0 encodes (my experience with all other encoders and codecs proved otherwise). I guess anything that increases the quality:size ratio is worth it.
Aktan wrote:
Usually I upload near-lossless to YT (YT doesn't support lossless H.264), so what moozooh said is correct. Lossless also has the advantage of having no lossy artifacts, despite the color loss =p.
With YouTube I try to approach the max upload size per clip, but I won't go out of my way wasting space just to do so. ^^; Anyway, I think the best way to preserve your settings is to upload an .flv instead. (Currently experimenting with that... I want to somehow get 60fps video on there... which is strange, because I remember it used to work... probably when HD came out, that changed things.) Also, I HAVE figured out a way to encode 'true' lossless to x264, and all that's required is downloading the latest AviSynth 2.6.0 beta (or alpha?) and using the following script:
source=ConvertToYV24(ffdshow_source())
return (source)
However, there's currently no decoder that will process the video correctly. What you get is the full luma displaying correctly, with the top-left quarter of the chroma displayed on top of it. Strange-looking, to say the least. The only thing I could see this site using it for is "future archiving", for when YV24 decoding becomes more mainstream/supported. File sizes are a lot bigger: 4:2:0 q0 is 1.9MB, q1 1.6MB; 4:4:4 q0 6.71MB, q1 8.28MB. LOL, guess that just proved lossless encoding efficiency right there.
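For anyone wondering where the extra space comes from: the raw per-frame arithmetic for the two pixel formats is simple. A sketch with an example resolution (not the actual clip from these tests):

```python
# Raw per-frame sizes at 8 bits per sample: YV12 (4:2:0) stores chroma at
# quarter resolution, so 1.5 bytes/pixel; YV24 (4:4:4) stores chroma at
# full resolution, so 3 bytes/pixel -- exactly double the raw data.

def frame_bytes(width, height, fmt):
    per_pixel = {"YV12": 1.5, "YV24": 3.0}[fmt]
    return int(width * height * per_pixel)

w, h = 320, 240  # example resolution only
print(frame_bytes(w, h, "YV12"))  # 115200
print(frame_bytes(w, h, "YV24"))  # 230400
```

So before compression even starts, the 4:4:4 encoder has twice as much data to deal with, which lines up with the much larger lossless file sizes above.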