Editor, Experienced Forum User, Published Author, Active player
(296)
Joined: 3/8/2004
Posts: 7469
Location: Arzareth
Obviously, God did not create the sun, moon and stars for the purpose of providing light, but for some other reason (to indicate times and seasons). There was light even when there was nothing but waters and God's voice. Today, we know from science that it is possible for there to be light even when there is nothing but water and sound. Look up sonoluminescence. (Not to say that that's what God did; he might just as well have provided raw light by willing it into existence.)
There was no prophet to measure time before a human was created, and by then, everything else was already in place for the first Shabbath. And we can safely assume God can measure time perfectly without any devices.
There are always people who think blunt opinions are not subject to anyone's status.
To address the topic of "code golf", I intentionally reduced the code length as much as I could, because the other alternative would have been to speed up the typing even more, to the point of being impossible to follow without constant abuse of the pause button. It was already much faster than I would have liked it to be. I was aiming for 800 lines. I got 940.
If it had been over 1000, I would probably have sacrificed the APU (sound support).
Because filtered scaling (resizing) produces a myriad of tones. Not resizing it means that it would have to be cropped (portions from some edges are removed).
If you unfiltered-scale it (i.e. just choose the nearest neighboring pixel), you will indeed keep the original colors, but it will still look bad, because now part of the original content is simply discarded, instead of being blended into remaining pixels.
[img_left]http://bisqwit.iki.fi/kala/snap/mp/snaporig.png[/img_left] Left: Original picture.
Bottom: Left to right: Lanczos-scaled; Cropped; Nearest-neighbor-scaled.
Thanks Kuwaga.
You are right in that if I stopped at any point to explain something in detail, perhaps with illustrations, I would not be able to fit this code in 15 minutes.
As a compromise, I tried to cover that aspect with very verbose comments; such as the one before PPU::tick() that explains the timings and what happens in each part of the rendering process.
I script the code and the video very carefully beforehand, down to the smallest details, because I'm well aware (from watching videos by others) that stopping to hesitate does not make for an entertaining video. In television, movies and theatre, actors often speak lines that are too long and too prone to enunciation errors to ever be used in real communication. Nevertheless, it is done, it is entertaining to watch, and the audience rarely pays attention to the fact that nobody actually speaks like that. So I do the same with the code that I write. Nobody writes code as straightforwardly and fluently as I do in my videos, but it makes for an entertaining and educational show.
In other words, although my programming videos are all Hollywood to varying degrees, they are still scientifically accurate Hollywood.
Sometimes in my videos, I forget to type something or make typos. When that happens, I immediately pause the recording to do the bug-tracking off-screen, and then quickly patch the bugs on-screen once the recording resumes, in order to avoid spending two minutes of the viewer's time figuring out what exactly is wrong with my code. I have gotten better at this with every video I have published, so today fewer mistakes end up in my videos. But there was still one in my NES emulator that I had to go and fix in part 2.
Kuwaga wrote:
It's not really necessary and maybe it'd be even annoying to do to you, but I can imagine it'd make those videos more appealing as educational videos.
I think my priorities go, in that order: entertainment, then teaching. If people are distracted because the video's pace slows too much, and they stop following the video, it does not matter what I teach. I need views. Without views, people who would be genuinely interested in the lesson will never find my video. Often, interesting stuff is simply stumbled upon. Sure, people search too, but at least for me, the majority of the stuff I find interesting on the Internet is not something I was intentionally searching for. This is why I don't stop to explain things even in the shorter programs. Usually I try to provide information without stopping, such as with voiceovers in the Doukutsu Monogatari music player or with comments in the MicroBlaze emulator.
Bag of Magic Food wrote:
The way I explain it is that TAS is something that could only be invented by mega-nerds.
Nerdom is, after all, an almost obsessive interest in a topic. The curiosity and the will that refuses to be discouraged. The type of personality that sits down to dissect a problem as deeply as needed, to grasp its roots and to understand how it grows.
Brandon wrote:
Where do you even learn how to program this well?
<scrimpy> fairly safe to say the dude likes coding things
This pretty much sums it up.
It doesn't help me be professionally successful, though. I'm just a low-paid web developer. And I don't even consider myself particularly talented. I look far up to folks like Ken Silverman, who created the Build engine upon which Duke Nukem 3D is based (when he was about 18), and Fabrice Bellard, who created QEMU among a dozen other accomplishments. While I have pursued similar goals now and then, I have never managed to create anything that could even be spoken of in the same sentence as their accomplishments. I am always walking in the footsteps of giants.
This was so cool. You sure this game is not a The Invisible Man game? Because the guards sure seemed completely oblivious to the hero casually walking past them while an intruder alert sounds.
Warp wrote:
I still don't understand what the point is. Such an arbitrary resolution that goes well beyond what any monitor supports. Why not make it 42400×24000 while you are at it? Bigger is better, so why not make it really big?
I use large resolutions for the videos because my input material is multi-resolution. When your input material is of a single resolution, it only makes sense to scale it by 2.0 in both directions in order to inhibit chroma subsampling, and possibly by an additional 2.0 or 4.0 in order to get YouTube to activate the 720p and 1080p quality levels, which provide more bitrate and fewer artifacts.
But when the material is of multiple resolutions ― in my editor I switch resolutions constantly ― the optimal resolution would be the least common multiple of the input resolutions.
If I only used e.g. 640×480 (VGA graphics mode) and 720×400 (VGA text mode), the least common multiple would be 5760×2400, for it is the smallest resolution where both input resolutions fit with integer ratios. If non-integer ratios are used, the pixels will not be preserved as sharp rectangles, but will either be blurred at their edges or be irregularly sized, or commonly both. And to inhibit chroma subsampling, it should be 2× that both horizontally and vertically, due to the odd 9× and 5× scaling factors respectively: 11520×4800. Obviously this kind of resolution is already impractical.
But the source material uploaded to YouTube should be as close to lossless as possible. Otherwise each further reprocessing step (as done by YouTube) only multiplies the loss already done before uploading the video.
But I do not even use as few resolutions as two. In my NES emulator coding video, all of the following resolutions were used: 640×480 656×420 656×480 672×420 672×480 672×480 688×480 720×400 720×408 720×408 720×475 720×494 752×288 752×296 752×344 752×352 752×400 752×400 752×400 752×408 752×416 752×420 752×424 752×462 752×464 752×480 752×480 752×490 752×536 752×552 768×400 768×400 768×480 784×400 784×448 784×456 784×464 784×472 784×480 784×560 800×400 800×400 800×448 800×480 800×480 816×400 816×448 816×480 816×560 832×448 832×480 832×480 848×400 848×408 848×408 848×416 848×416 848×424 848×456 848×464 848×472 848×480 848×480 848×480 848×480 848×480 848×480 848×480 848×480 848×480 848×488 848×496 848×496 848×496 848×496 848×496 848×496 848×496 848×496 848×496 848×504 848×504 848×504 848×504 848×504 848×528 848×536 848×536 848×536 848×544 848×544 848×544 848×544 848×544 848×544 848×552 848×552 848×552 848×560 848×560 848×560 848×560 848×560 880×400 880×416 880×424 880×424 880×432 880×440 880×448 880×456 880×464 880×464 880×480 880×480 880×480 896×480 896×480 912×448 912×480 912×480 928×480 944×400 944×408 944×420 944×448 944×480 944×480 944×480 944×512 944×528 944×536 944×600 976×480 992×488 992×504 992×560 1008×560 1024×560 1026×475 1040×608 1044×475 1056×608 1062×400 1072×608 1088×568 1088×608
Of course some of those resolutions were padded horizontally to preserve aspect ratio, so the list becomes slightly smaller: 852×288 852×296 852×344 852×352 852×400 852×408 852×416 852×420 852×424 852×432 852×440 852×448 852×456 852×462 852×464 852×472 852×475 852×480 866×488 870×490 880×496 896×504 910×512 938×528 952×536 960×400 960×408 960×475 960×494 966×544 980×552 994×560 1008×568 1066×600 1080×608
It is still quite a large list, and the least common multiple does not really work here (it would produce 115945931878090554240×23348390524866115315211404800). So I simply picked the resolution that I want to optimize for ― this was 848×400 ― and upscaled it with an integer of choice ― this was 5 and 6 ― producing 4240×2400.
Also, I don't think a large-sounding resolution like 4000×3000 is going to be that excessive in the future. Images of that size are everyday material for people dealing with print and image editing. It is possible to make good print-quality snapshots from my videos (provided that YouTube does not damage them).
Why did I not use a larger size? Most people today use a desktop resolution of 1024×768 or 1280×800 [1]. According to that source, approximately 1 % of Steam gamers use a desktop as large as 2560×1440 (WQHD). In order to avoid chroma subsampling artifacts, my video should be at least double that (5120×2880). But YouTube only goes up to 4K, so that is approximately where I try to set my cap.
Think I am overestimating the impact of chroma subsampling artifacts? Try watching this video at 480p or smaller: http://youtu.be/8x9Ya4izFaE The green text on the blue background will have artifacts that vary from tolerable to horrible depending on your display gamma.
For the tl;dr people, the reasons are: to minimize chroma subsampling artifacts, to preserve sharp uniform-size rectangular pixels, and to enable the better-bitrate options on YouTube.
Finally, I should make a set of simulated comparison images some day…
EDIT: I posted part 2, the demonstration video. http://youtu.be/XZWw745wPXY ― It is still uploading, but it should be watchable at some quality level by December 8th 0:00 UTC.
I thought YouTube's 4K is 4:3, i.e. 4096×3072. Source: http://en.wikipedia.org/wiki/YouTube#Quality_and_codecs
In number of pixels, 4240×2400 is not greater than 4096×3072. But of course the X axis is greater.
But I also hit YouTube's buggy encoder in this video: http://www.youtube.com/watch?v=N8elxpSu9pw "Creating a raytracer". In that video, all levels except 240p and 480p, including 360p, are broken from 11:56 onwards. I uploaded that video in 3200×2400 or 1600×1200, I'm not sure which. (I had to reupload several times due to A/V sync issues, and I experimented with different containers & codecs.) Smaller than 4K in any case.
ledauphinbenoit wrote:
I think I might be most impressed by how well documented the code is. As a non-systems programmer, I could actually follow what was going on.
Thanks! I tried very hard to keep the big picture perceivable at all times without encumbering the actual coding too much with stuff that might increase the program length by a factor of 2.
creaothceann wrote:
1080p version shows graphical corruption at 02:00 and beyond :/
The corruption resolves at 02:46. More corruption is visible at 07:31. But yes, it is not the first time I have seen such failures in YouTube's re-encoding process.
Off-topic: I seem to lack a "sad" avatar. Number 3 is "tired", as in "I don't want to partake in this pointless bickering anymore"; 12 is "grief", as in "These events upset me greatly"; and 13 is "bleh", as in "Whatever, who cares". None of these is a good option for a light-weight passing :-( emotion.
nfq wrote:
I don't know anything about programming, but with your skills, wouldn't it be easy to make the PS2 TAS emulator so that it doesn't desync?
At one point -- three years ago -- I actually began creating a PSX emulator (not PS2) from scratch, but I eventually stopped. The lack of unit tests (or of the knowledge of how to make them for the PSX platform) was probably the greatest reason. It began to feel like I would have to implement the entire emulator before I could even begin testing it. I think I implemented all of the CPU, most of COP2 (the GTE), and some of COP0 before giving up. For reference, I used documentation from sources such as psx.rules.org and the PCSX source code.
For PS2 I would presume the situation is even worse.
nfq wrote:
Hey Bisqwit where does the music in your coding video come from? Because it's pretty awesome music.
The Jinguji detective series. It's a Japanese-only Famicom game series, four installments in total.
I intentionally chose, from NES/FC games, a soundtrack that is both a favourite of mine and relatively unknown to most of the audience.
NrgSpoon wrote:
Clearly he TASed his code input, recording his keystrokes and playing it back without any delays.
I don't call these tool-assisted education videos without a reason.
The clock was not hacked.
<Nach> as Ilari said, it reads Famtasia format, because it's simple
<scrimpy> what about .fm2, actually?
<Ilari> IIRC, .fmv is much simpler than .fcm or .fm2.
I converted FM2 files into FMV, because of the reason stated by Ilari.
Kuwaga wrote:
The opcode decoding matrix looks like magic to me. I don't really get it at all.
I am pleased that you liked it.
I figured out the minimal list of C++ expressions that can be used to implement all NES opcodes. The matrix simply chooses, using the "op" template parameter, which expressions from the list to evaluate. For example, opcode $06, "asl zp" is implemented as follows:
| addr = RB(PC++);
| t &= RB(addr+d);
P.C = t & 0x80;
t = (t << 1) | (sb << 0);
WB(addr+d, t);
| tick();
| P.N = t & 0x80;
| P.Z = u8(t) == 0;
and opcode $21, "and zx", is implemented as follows:
| addr = RB(PC++);
d = X;
addr=u8(addr+d); d=0; tick();
addr=RB(c=addr); addr+=256*RB(wrap(c,c+1));
| t &= A;
c = t; t = 0xFF;
| t &= RB(addr+d);
| t = c & t;
A = t;
| P.N = t & 0x80;
| P.Z = u8(t) == 0;
Opcode $D8, "cld":
t &= P.raw|pbits; c = t;
| tick();
t = 1;
t <<= 1;
t <<= 2;
t = u8(~t);
| t = c & t;
P.raw = t & ~0x30;
Opcode $A8, "tay":
| t &= A;
| tick();
Y = t;
| P.N = t & 0x80;
| P.Z = u8(t) == 0;
You can spot a number of similarities between these four instructions (marked with "|"). The matrix determines which lines to execute for each opcode. The matrix is evaluated at compile-time, so the code ends up being surprisingly efficient. The two latter opcodes, for example, end up as the following assembler code (inlining selectively disabled for brevity):
Ins<0xD8>:
push rbx
movzx ebx, byte CPU_P
call CPU_tick
or ebx, 0x30
and ebx, ~0x38
mov byte CPU_P, bl
pop rbx
ret
Ins<0xA8>:
push rbx
mov bl, byte CPU_A
call CPU_tick
mov edi, 0x80
and edi, ebx
mov byte CPU_Y, bl
call some_regbit_function
mov al, byte CPU_P
xor edx, edx
test bl, bl
pop rbx
sete dl
add edx, edx
and eax, ~2
or eax, edx
mov byte CPU_P, al
ret
The compiler generates a distinct function for each of these 256+3 opcodes. Pointers to those functions are placed in the function table within Op().
Surprisingly though, runtime evaluation ended up not being much slower (at least when the matrix was expressed in a simpler, although more verbose format.) Evaluating the matrix at runtime trades code straightforwardness for cache locality, but the real bottleneck in speed is in the synchronous nature of the emulator (tick() calls).
As a side-note: I uploaded this video in 4240x2400 resolution. The highest YouTube currently provides is 480p. On "my videos" it says "Published 0%". On the upload form it says "your video will be live in a moment" and "Processing 99% — Finishing processing...". Anyone know whether I can expect the 720p/1080p/Original versions to appear, like ever?
And on his bbs,
Bisqwit wrote:
I just watched the Cave Story TAS by Nitsuja.
I presume that you already know about the TAS, as well.
It inspired me to play the game too.
pixel wrote:
I a little watched it.