With the advent of the Saturnus core in BizHawk 2.0 and later, a new option has made its appearance (in that specific core): Horizontal blending. If this option is enabled, BizHawk stretches the output to twice the width with some filtering applied.
I've done a bit of research regarding how the Sega Saturn handled transparency (I really recommend reading the article and watching the video):
http://www.mattgreer.org/articles/sega-saturn-and-transparency/
https://www.youtube.com/watch?v=f_OchOV_WDg
As it turns out, many Saturn games go for a mesh approach to handle transparency. This consists of drawing every other pixel of a sprite to simulate a transparency effect. This is where it gets interesting: On composite output, the mesh is blurred in such a way that e.g. the spotlights in Mega Man X4 appear transparent. However, the mesh is clearly visible on S-Video or RGB output.
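For what it's worth, the composite smearing can be roughly approximated in AviSynth by blending each pixel with its horizontal neighbor. This is only a crude sketch of the effect, not an accurate composite simulation, and the file name is a placeholder:

```avisynth
# Crude sketch: approximate composite-style horizontal smearing by
# blending each pixel with its left neighbor at 50% opacity.
# "dump.avi" is a placeholder; Layer assumes an RGB32 (or YUY2) source.
v = AviSource("dump.avi")
v.Layer(v, level=128, x=1) # merges the alternating mesh columns
```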
As it happens, I'm currently handling the publication of a Saturn game called Tryrush Deppy. I dumped it once without horizontal blending and once with it. Below are 6 screenshots illustrating different approaches, with their pros and cons:
Original dump without horizontal blending:
Pros: Picture is crystal clear
Cons: Mesh is clearly visible
AviSynth code: N/A (well, aside from opening the AVIs)
No horizontal blending, blur filter applied via AviSynth:
Pros: No mesh visible
Cons: Picture is obviously blurry
AviSynth code: Blur(1.00)
No horizontal blending, horizontal blur filter applied via AviSynth:
Pros: No mesh visible
Cons: Picture is obviously blurry (though not as blurry as the above one).
AviSynth code: Blur(1, 0) #note the comma instead of the period
Horizontal blending, resized back to original width with LanczosResize:
Pros: Mesh is less visible.
Cons: Picture isn't as clear.
AviSynth code: LanczosResize(330, 240, taps=2)
Horizontal blending, resized back to original width with PointResize:
Pros: Picture is clear, mesh isn't as visible as in the non-horizontally-blended one.
Cons: Some parts might appear overly squished, due to the nature of PointResize.
AviSynth code: PointResize(330, 240)
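If anyone wants to reproduce the comparison, the variants above can be viewed side by side with something like this (a sketch; the two file names are placeholders for the dumps):

```avisynth
# Sketch: compare the approaches above side by side.
# "dump_plain.avi" and "dump_blended.avi" are placeholder file names.
plain   = AviSource("dump_plain.avi")   # 330x240, no horizontal blending
blended = AviSource("dump_blended.avi") # 660x240, horizontal blending on
a = plain.Subtitle("original")
b = plain.Blur(1.00).Subtitle("Blur(1.00)")
c = plain.Blur(1, 0).Subtitle("Blur(1, 0)")
d = blended.LanczosResize(330, 240, taps=2).Subtitle("LanczosResize")
e = blended.PointResize(330, 240).Subtitle("PointResize")
# BlankClip pads the second row so both rows have equal width.
StackVertical(StackHorizontal(a, b, c), StackHorizontal(d, e, e.BlankClip))
```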
So, what would be the best approach for handling the mesh effect present in many Saturn games?
I myself am torn between the plain dump (no horizontal blending, with absolutely no blurring applied) and either one of the two horizontal blending approaches.
Something of note is that the footage I found on YouTube (an SGDQ 2017 speedrun, a full playthrough) has the mesh clearly visible.
Warning: When making decisions, I try to collect as much data as possible before actually deciding. I try to abstract away and see the principles behind real world events and people's opinions. I try to generalize them and turn into something clear and reusable. I hate depending on unpredictable and having to make lottery guesses. Any problem can be solved by systems thinking and acting.
After point-resizing the original non-horizontally-blended dump by a factor of 8 or 16, then resizing it back to the original resolution with AreaResize, I still got a very visible mesh. As for the horizontally blended one, I got a result very close to the 5th screenshot (after first point-resizing it by a factor of 8, then downscaling straight to 330x240 with AreaResize):
The following is the result of first downscaling to 660x240 with AreaResize, then point-resizing to 330x240:
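For reference, the two pipelines described above would look roughly like this in AviSynth (a sketch: the file name is a placeholder, AreaResize is an external plugin, and the exact chain of the second pipeline is my guess):

```avisynth
# Sketch of the two pipelines described above.
# "dump_blended.avi" is a placeholder for the horizontally blended (660x240) dump;
# AreaResize comes from an external plugin.
v = AviSource("dump_blended.avi")
up = v.PointResize(v.Width * 8, v.Height * 8) # 8x nearest-neighbor upscale

# Pipeline 1: downscale straight to the original resolution.
p1 = up.AreaResize(330, 240)

# Pipeline 2 (my guess at the exact chain): AreaResize to 660x240 first,
# then point-resize down to 330x240.
p2 = up.AreaResize(660, 240).PointResize(330, 240)

p1
```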
I'm... not sure I like the end result (especially since it quadruples the amount of data). It ends up being a bit blurry. (It also makes the video look somewhat shaky, but that might be because the video runs at 59.88 FPS, while conventional monitors usually have a refresh rate of around 60 Hz.)
Joined: 4/17/2010
Posts: 11468
Location: Lake Chargoggagoggmanchauggagoggchaubunagungamaugg
Does anyone know a guy with a Saturn and a CRT anywhere? I haven't watched the video yet; it probably shows clearly that the mesh is completely blended and half-transparent, but a photo of a real TV and first-hand human experience would help a lot.
If it's in the video already, ignore me.
From the video:
Mega Man X4 (Saturn version):
(click to open a Dropbox page where you can enlarge)
Virtua Fighter Kids:
(click to open a Dropbox page where you can enlarge)
Because the mesh is visible over S-Video and RGB, and the console officially supported both of these output types, there's no reason the original dump shouldn't be used.
Does the motion with that last method look blurred too? I mean, while the screen and the objects are moving, is it obvious that the whole thing is filtered, or it mostly resembles the clean footage?
I think none of the methods used allows filtering only the mesh itself. That would probably have to be hacked into the emulator core itself. Until then, I wouldn't object to the mesh too much, but I want to give the filter a chance as well.
Yeah. The filtered one looks interesting, but sharp pixels are just what most of us expect to see from a TASVideos encode.
Does the motion with that last method look blurred too? I mean, while the screen and the objects are moving, is it obvious that the whole thing is filtered, or it mostly resembles the clean footage?
I think none of the methods used allows filtering only the mesh itself. That would probably have to be hacked into the emulator core itself. Until then, I wouldn't object to the mesh too much, but I want to give the filter a chance as well.
There is no motion blur because the filter works spatially (in 2D), not temporally. The problem is that games with very small text will have it blurred too.
Example: Castlevania
Language: AviSynth
Open("castlevaniasotn-tas-maria-arandomgametaser.mkv")
sh1 = StackHorizontal(Method_0.RightBorder, Method_1.RightBorder, Method_2).BottomBorder
sh2 = StackHorizontal(Method_3.RightBorder, Method_4.RightBorder, Method_5)
StackVertical(sh1, sh2)
FullScreen(1920, 1080) # enter your screen size; intended for viewing with MPC-HC (or other players) in fullscreen
function Method_0(clip c) {c .PointResize(c.Width * 2, c.Height * 2).Subtitle("original (no change)" , text_color=$FFFFFF)}
function Method_1(clip c) {c.Layer(c, level=128, x=1) .PointResize(c.Width * 2, c.Height * 2).Subtitle("method 1 (add left neighbor pixel at 4/8 intensity)" , text_color=$FFFFFF)}
function Method_2(clip c) {c.Layer(c, op="lighten", level=160, x=1) .PointResize(c.Width * 2, c.Height * 2).Subtitle("method 2 (lighten with left neighbor pixel at 5/8 intensity)" , text_color=$FFFFFF)}
function Method_3(clip c) {c.Layer(c, op="lighten", level= 64, x=1).Layer(c, op="lighten", level= 64, x=-1).PointResize(c.Width * 2, c.Height * 2).Subtitle("method 3 (lighten with left and right pixels at 2/8 intensity)", text_color=$FFFFFF)}
function Method_4(clip c) {c.Layer(c, op="lighten", level= 96, x=1).Layer(c, op="lighten", level= 96, x=-1).PointResize(c.Width * 2, c.Height * 2).Subtitle("method 4 (lighten with left and right pixels at 3/8 intensity)", text_color=$FFFFFF)}
function Method_5(clip c) {c.Layer(c, op="lighten", level=128, x=1).Layer(c, op="lighten", level=128, x=-1).PointResize(c.Width * 2, c.Height * 2).Subtitle("method 5 (lighten with left and right pixels at 4/8 intensity)", text_color=$FFFFFF)}
function FullScreen(clip c, int Screen_Width, int Screen_Height) {
c
x = (Screen_Width - Width ) / 2
y = (Screen_Height - Height) / 2
AddBorders(x, y, x, y)
}
function Open(string f) {
v = DSS2(f, pixel_type="RGB32")
a = DirectShowSource(f, video=false)
AudioDub(v, a)
}
function BottomBorder(clip c) {c.Crop(0, 0, 0, -1).AddBorders(0, 0, 0, 1, color=$FFFFFF)}
function RightBorder(clip c) {c.Crop(0, 0, -1, 0).AddBorders(0, 0, 1, 0, color=$FFFFFF)}
function Show(clip a, clip b, string "text_a", string "text_b") {
text_a = default(text_a, "")
text_b = default(text_b, "")
a = a.ConvertToRGB32
b = b.ConvertToRGB32
c = Subtract(a, b)
x = a.Width / 2
y = a.Height / 2
a1 = a.Crop(0, 0, 0, -y).Crop(1, 1, -1, -1).AddBorders(1, 1, +1, +1, $FF0000).Subtitle(text_a, text_color=$FFFFFF)
a2 = a.Crop(0, +y, 0, 0).Crop(1, 1, -1, -1).AddBorders(1, 1, +1, +1, $0000FF)
b1 = b.Crop(0, 0, 0, -y).Crop(1, 1, -1, -1).AddBorders(1, 1, +1, +1, $FF0000).Subtitle(text_b, text_color=$FFFFFF)
b2 = b.Crop(0, +y, 0, 0).Crop(1, 1, -1, -1).AddBorders(1, 1, +1, +1, $0000FF)
c1 = c.Crop(0, 0, 0, -y).Crop(1, 1, -1, -1).AddBorders(1, 1, +1, +1, $FF0000).Subtitle("top" , text_color=$FFFFFF, halo_color=$FF0000)
c2 = c.Crop(0, +y, 0, 0).Crop(1, 1, -1, -1).AddBorders(1, 1, +1, +1, $0000FF).Subtitle("bottom", text_color=$FFFFFF, halo_color=$0000FF)
a = StackVertical (a1, a2)
b = StackVertical (b1, b2)
c = StackHorizontal(c1, c2)
StackVertical(StackHorizontal(a, b), c)
}
Result (view in fullscreen / download and view with fullscreen viewers like IrfanView)
[Picture 01] Method_1 completely removes the mesh but blurs the small text (e.g. the "START" button).
[Picture 02] The "lighten" methods preserve the text, but cannot remove the mesh completely. (They also make the picture a little bit brighter.)
[Picture 04] As you can see, the algorithm works purely horizontally.
[Picture 05] Removing static meshes is the most important job of such a filter. As said above, the "lighten" filters fail here.
For games where the meshes are only used for small, fast-moving objects (like Maria in the examples) I wouldn't apply a filter.
I suspect that what we'd need is a "bloom" filter which makes only the bright pixels bleed into the dark pixels.
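One possible starting point for such a bloom, sketched with MaskTools2. The 128 threshold and the mask construction are guesses on my part, not a tested recipe:

```avisynth
# Hypothetical bloom sketch: let only bright pixels bleed into dark neighbors.
# Requires the MaskTools2 plugin (mt_binarize); the 128 threshold is a guess.
v = AviSource("dump.avi") # placeholder file name
bright = v.ConvertToY8().mt_binarize(threshold=128).Blur(1, 0) # soft mask of bright areas
halo = v.Blur(1, 0) # horizontally blurred copy supplies the bleed
Overlay(v, halo, mask=bright, mode="lighten") # bleed only where the mask is bright
```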
f = "castlevaniasotn-tas-maria-arandomgametaser.mkv"
DSS2(f, pixel_type="RGB32")
v0 = Method_0.Subtitle("original" , text_color=$FFFFFF)
v1 = Method_1.Subtitle("add left neighbor at 50% intensity", text_color=$FFFFFF)
v2 = Method_2.Subtitle("convolution" , text_color=$FFFFFF)
v3 = Method_3.Subtitle("bilinear resize" , text_color=$FFFFFF)
Interleave(v0, v1, v2, v3)
function Method_0(clip c) {
c
PointResize(Width * 3, Height * 3)
}
function Method_1(clip c) {
c.Layer(c, level=128, x=1)
PointResize(Width * 3, Height * 3)
}
function Method_2(clip c) {
c
GeneralConvolution(0, "
0 1 0
1 0 1
0 1 0")
c.Layer(last, level=128)
PointResize(Width * 3, Height * 3)
}
function Method_3(clip c) {
c
BilinearResize(Width * 3, Height * 3)
}
(Note that in all methods except #3 the 3x PointResize is purely for display; only Method_3 actually resamples the picture.)
Result: http://imgur.com/a/IpX59
It removes the mesh but keeps the text more readable by strengthening vertical structures. It also works vertically, so there's a bit of vertical blurring too. I don't like how it makes the text a bit darker even when it shouldn't, but that's a problem for another day.
EDIT: Unfortunately it's not suited for dithering that consists of columns only, which can be seen in the very first post in the bottom left corner.
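A possible tweak (untested) would be to restrict the convolution kernel to horizontal neighbors, so that dithering made of alternating columns also gets averaged:

```avisynth
# Untested variant of method 2: a horizontal-only kernel, so that
# dithering consisting of alternating columns also gets averaged.
function Method_2h(clip c) {
c
GeneralConvolution(0, "
0 0 0
1 0 1
0 0 0")
c.Layer(last, level=128)
PointResize(Width * 3, Height * 3)
}
```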