Is there a DLL interface of any sort for writing plugins?
My goal is a memory watcher, buffer watcher, and input system: specifically, image processing combined with RAM watching (and what I have in mind makes use of parallel processing).
If there isn't anything implemented or in the works, I am interested in modifying Bizhawk and adding this kind of functionality into it.
Thanks everyone
nope. hack the source code or use lua.
I don't know what a "buffer watcher" is, but there are already rich memory-watching capabilities in bizhawk. Whatever parallel image processing logic you have in mind sounds particular to your project.
Currently there's some deliberation on a new interface for kind-of scripting bizhawk with c# code, so that lua isn't necessary. Whether that eventually supports a DLL is a minor detail, but it will be a .net dll and not an OSX dylib. There's virtually no way you'll control bizhawk from a native dylib.
When I say dylib, I mean a generic shared object, i.e. a win32 DLL in this case. I'm a bit too used to OS X development.
I'll take a look at the source code and hack my solution then. Is all extensibility of this kind exclusively implemented in Lua script? The Lua points might give me a place to quickly inject a DLL loader.
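As a rough sketch of what that injection could look like on the C# side (the IEmuPlugin interface and everything around it are my own invention for illustration, not anything in the BizHawk codebase):

using System;
using System.Reflection;

// Hypothetical plugin contract; BizHawk has no such interface today.
public interface IEmuPlugin
{
    // Called once per emulated frame with a copy of system RAM.
    void OnFrame(byte[] ram);
}

public static class PluginLoader
{
    // Load a .NET assembly from disk and instantiate the first
    // exported type that implements IEmuPlugin.
    public static IEmuPlugin LoadFirst(string dllPath)
    {
        Assembly asm = Assembly.LoadFrom(dllPath);
        foreach (Type t in asm.GetExportedTypes())
        {
            if (typeof(IEmuPlugin).IsAssignableFrom(t) && !t.IsAbstract)
                return (IEmuPlugin)Activator.CreateInstance(t);
        }
        throw new InvalidOperationException("No IEmuPlugin in " + dllPath);
    }
}

Note this only works for .NET assemblies, which matches the earlier point: a managed host loads managed plugins naturally, while a native shared object would need P/Invoke plumbing.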
lua's the only point of extensibility besides the one I mentioned we're planning, which would, if built, make what you proposed redundant. why do you need a dll? just make a custom build.
It'll be a custom build then. The plan is to keep the tool as its own DLL solution so it can be applied to other projects; I want to keep the modifications to the host emulator to a minimum.
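Something like this is all I'd want to add to the host, with the actual logic living behind one interface in my DLL (all names here are hypothetical placeholders, not real BizHawk APIs):

using System;

// Contract exposed by the tool DLL, so the host only needs to
// know one interface and one call site.
public interface IRamWatcher
{
    void OnFrame(byte[] ram, byte[] videoBuffer);
}

public static class HostHook
{
    // The single modification to the emulator: one call per frame.
    // The two delegates stand in for however the host actually
    // exposes emulated RAM and the video buffer.
    public static void AfterFrame(IRamWatcher watcher,
                                  Func<byte[]> getRam,
                                  Func<byte[]> getVideo)
    {
        watcher.OnFrame(getRam(), getVideo());
    }
}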
Unfortunately there is no real facility for parallel processing in Lua. In this thread:
http://stackoverflow.com/questions/5689548/what-multithreading-package-for-lua-just-works-as-shipped
all the options for concurrency are reviewed and found lacking. It seems Lua is intended to be embedded in a single thread at a time. Anyway, the scripts run plenty fast as it is. In MarI/O, it is amazing how much AI calculation gets done in each frame without impacting the framerate of the game. I guess that's possible when you're not using CPU resources running a complex game and rendering millions of polygons/sec.
Besides not knowing the difference between emulating a CPU and letting the CPU take a nap while a GPU renders millions of polygons, if the framerate isn't impacted by running the AI, then it's because the game's computation is dominating the process. That's the opposite of the conclusion you drew.
The article you linked is pretty pathetic. LuaThread is dismissed because he doesn't like it. But you say there's no real facility. I assure you, if someone wants to do this in parallel, they can.
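For what it's worth, once you're in managed code the parallelism itself is trivial. A throwaway sketch scanning a RAM snapshot across all cores with Parallel.For (the snapshot and the match condition are made up for illustration):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class RamScan
{
    static void Main()
    {
        byte[] ram = new byte[0x10000];   // stand-in RAM snapshot
        new Random(0).NextBytes(ram);

        var hits = new ConcurrentBag<int>();
        // Partition the address range across all cores; each
        // iteration tests one address against an example predicate.
        Parallel.For(0, ram.Length, addr =>
        {
            if (ram[addr] == 0xFF)        // arbitrary example condition
                hits.Add(addr);
        });

        Console.WriteLine("{0} matching addresses", hits.Count);
    }
}

The same pattern applies to image processing: split the framebuffer into rows or tiles and hand them to Parallel.For.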