Posts for Warp


Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
What I have come to appreciate in the last year or two is something that many programmers have a huge prejudice against (for reasons I cannot understand anymore): prefixing your variable names descriptively. Nowadays I prefix all member variable names with an 'm', constants with a 'k' and variables inside namespaces (named or unnamed) with a 'g' (which stands for "global"; they are "global" in the sense that they are not local to any function or class, although not really in the global namespace, but always local to either the current compilation unit or a named namespace).

I don't prefix variables local to a function in any way, as I feel that the very lack of prefix is in itself indicative that they are in the local scope. (Some programmers prefix even these, with an 'a'.)

(Variables with the 'g' and 'k' prefixes are in fact usually at the same scope, ie. a nameless or named namespace. Their difference is that the 'k' indicates that the variable is a constant. If it's of integral type, it can be used where compile-time constants are expected. I usually don't prefix local constants with a 'k', unless I want to emphasize that it's being used as a compile-time constant.)

I have found that this simple naming convention really makes my code more readable to myself, especially after months or even years. It's much easier to look for variable declarations when you see from its name whether it's a member variable, a variable local to the current function, a variable in a nameless namespace, or a constant.
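As a rough sketch of how that convention looks in practice (the class and the names here are just made up for the sake of illustration):

namespace // nameless namespace: local to this compilation unit
{
    const int kMaxRetries = 5; // 'k': a constant (usable as a compile-time constant)
    int gRetryCount = 0;       // 'g': not local to any function or class
}

class Downloader
{
 public:
    Downloader(): mFinished(false) {}

    void download()
    {
        int attempts = 0; // no prefix: local to this function
        while(attempts < kMaxRetries && !mFinished)
        {
            ++attempts;
            ++gRetryCount;
        }
    }

 private:
    bool mFinished; // 'm': member variable
};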
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
The "using namespace std;" is a really, really bad habit that approximately 100% of beginner C++ programmers have (the percentage begins to lower the more advanced the programmer is). This sometimes goes to such ridiculous extents that you may even see a short example code which has a "using namespace std;" in order to save writing one "std::" prefix in the later code (thus the total amount of writing that was saved was -16 characters). What most C++ programmers (beginner and intermediate) don't seem to realize is that the "std::" prefixes actually make the code easier to read, not harder. For example, suppose you have a line of code like this:
search(i1, e1, i2, e2);
Is that calling some local function, perhaps some member function of the current class, a function defined in some of the header files included in this file, or maybe a function in the standard library? It's impossible to say from that line alone. However, suppose the line was written as:
std::search(i1, e1, i2, e2);
Now there isn't even a shadow of a doubt what this function is: A function in the C++ standard library. Even if you don't know what it does, it's easy to find info about it. I can think of only one valid usage for "using namespace", and that's if you want to import some custom namespace into another, like:
namespace SomeNamespace { using namespace AnotherNamespace; }
Any other use is horrible programming.
EEssentia wrote:
I would also point out a few things in the code... Use typedefs for pointers! The syntax looks real messy if you do not.
That should not be taken as a general principle. Sometimes substituting pointers with typedef'd names only makes the program harder to read, because it becomes harder to visually distinguish between pointer and non-pointer types.
Do not use function pointers - use functors!
Functors can only be passed as parameters to template functions, and consequently cannot be used in every possible situation where a function pointer is required. (Also functors are more verbose to write, at least until the next C++ standard gets ratified and implemented widely.)
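To illustrate the difference (the forEachEvent() callback API below is purely hypothetical): a functor works fine as a parameter to a template function such as std::sort(), but it cannot be handed to an interface which expects a plain function pointer.

#include <algorithm>
#include <vector>

// A functor: fine as a template parameter, and the comparison can be inlined.
struct DescendingOrder
{
    bool operator()(int a, int b) const { return a > b; }
};

// A hypothetical C-style API taking a plain function pointer. A functor
// cannot be passed here; only an actual function can.
void forEachEvent(void (*callback)(int event));

void example(std::vector<int>& values)
{
    std::sort(values.begin(), values.end(), DescendingOrder()); // ok
    // forEachEvent(DescendingOrder()); // would not compile
}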
Furthermore, stripping the names of parameters in declarations is bad. Bad, bad,
It's not always bad. For example, if you have eg. a member function named "setXCoordinate(int)" or "setModifiedFlag(bool)", those don't really require the parameters to be named. They are rather self-evident even without. (Also many people leave parameters of private function declarations unnamed to reduce the visual clutter of header files.)
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
ZeXr0 wrote:
I don't even use extension in Firefox, can you tell me what extension you find vital for a great browsing experience ?
Firefox add-ons which I use regularly:
- NoScript: Lets you choose whether to allow JavaScript or not on a per-site basis, either temporarily (for the session) or permanently. While browsers such as Firefox are getting safer and safer with respect to JavaScript, it's still nice to have an extra line of defense against annoyances and outright denial of service attacks.
- Cookie Button: A handy menu that lets you enable/disable cookies temporarily/permanently on the current website. (Again, related to privacy. Tracking cookies might not be dangerous per se, but some people just don't want to leave a browsing trail on their computer and willingly distribute it to advertisers.)
- RefControl: Likewise, but for sending referrers.
- Menu Editor: Without this, Firefox's context menu would be larger than the height of my screen.
- Tab Mix Plus: Essential Firefox plugin. I don't understand how anybody is willing to use the browser without it.

Less frequently, but still sometimes useful:
- Nuke Anything: Remove any page component via the context menu. (Doesn't remember removed elements, though, so it's only useful for removing large annoyances for the duration of viewing the page.)
- Adblock: A bit like Nuke Anything, but it remembers and is oriented more towards banner ads.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Playing with bytecode interpreters, eh?-)
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
It is possible for a program to read input from two mice independently, but AFAIK the program must have been specifically programmed to do this (there are a few casual games which have support for this). I don't know if the same is possible with two keyboards.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Mencoder can read tons of input container formats, but can it be used to write formats other than avi, eg. mkv?
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
I have always wondered why bits are used as the unit in these measurements. What possible info could it give that bytes couldn't? The only thing bits do is cause confusion. People don't measure file sizes in bits. They measure them in bytes. I have no idea what "103 megabits/s" means. I have to divide it by 8 to get a notion of how fast it really is. Telling it in bits is completely useless.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
I split the Jekyll video into two using mencoder. In Windows you could probably use VirtualDub for the same purpose.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
p0rtal_0f_rain wrote:
And should the download stop in the middle of nowhere, it picks itself back up without starting over - unlike the way Firefox and IE does.
I have actually always wondered why browsers don't support continuing interrupted downloads, even though it's a completely trivial thing to do, and basically every other program which downloads things supports it. Web browsers seem to be the only programs which don't. I can't understand why. What would be the problem? For example Firefox, based on Mozilla, based on Netscape, based on some older Mozilla or whatever, has been in development for something like 15 years, and it still has no support for it. (Well, that's actually not completely true. Firefox does have partial support for continuing an interrupted download. For example, at least in the Linux version, if you shut Firefox down while it is downloading a large file, the next time you start it, it will continue the download from where it was interrupted. I don't understand, however, why it cannot do the same if it's the internet connection that gets interrupted.)
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
cwitty wrote:
Warp wrote:
If the algorithm enters an infinite loop and never finds a solution, it is broken. If the algorithm always finds a solution, then it's not broken.
A BFS that ignores history will still find the solution in the presence of loops in the graph (although it may do a lot of extra work). At each level, there is still a finite number of nodes, so it will eventually progress to the level containing the solution.
Now that you mention it, you are right. Checking for loops in a BFS would indeed save some work (and memory), but omitting the check wouldn't stop it from finding the solution, as all the other branches are advancing in parallel as well.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
SXL wrote:
I converted "comics" to "any kind of comics, including mangas", because US comics are not very popular outside of the US.
I bet that Marvel and DC comics are much more popular than any manga in most countries.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
I have long wondered about the current trend of making all kinds of tutorials in video format, even tutorials which would be enormously more suitable in HTML form. Programming is a perfect example of something which would be much better taught in HTML form than in video. The reader can study the tutorial at his own pace, go back to check something he didn't understand, copy code from the web page to his text editor if needed, and so on. An HTML page can also link eg. to online reference manuals, have screenshots if necessary, and so on. What sense does it make to have a programming tutorial as a video?
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
CyberShadow wrote:
You're saying that an algorithm that enters an infinite loop is broken while an algorithm that takes zillions of years to run is not.
Isn't that rather obvious? If the algorithm enters an infinite loop and never finds a solution, it is broken. If the algorithm always finds a solution, then it's not broken. Computational complexity has nothing to do with whether an algorithm is correct or broken. I don't even understand what it is that you are arguing here. It's like you are comparing a working DFS implementation (one which eventually finds a solution) with a broken BFS implementation (one which may end up in an infinite loop). What exactly is it that you are claiming? That BFS always has the possibility of entering an infinite loop while DFS doesn't? Or something else?
The complexity of DFS (with no knowledge of other nodes) over a graph is exponentially proportional to the number of loops in the graph. For a complete graph with N nodes, the complexity is N!.
That depends on the algorithm in question. Some graph searching algorithms can be performed in linear time using DFS, others have higher computational complexities. For example the problem "find a path from node A to node B in an undirected graph" can be solved in linear time using DFS. (Why? Because you can raise a flag in each node the first time the DFS visits it, so that the node is never considered again. Thus you will perform at most n steps during the search, where n is the total number of nodes.) Of course that's a rather simple problem. More complicated problems (such as "find the shortest path from node A to node B") may require more complex algorithms. It really depends on the nature of the problem.
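A minimal sketch of that idea, assuming an adjacency-list representation of the graph (the representation and the names are just for illustration):

#include <cstddef>
#include <vector>

// Returns true if 'to' is reachable from 'from'. Each node is flagged in
// visited[] the first time it is entered, so it is never entered again.
bool pathExists(const std::vector<std::vector<int> >& adjacency,
                std::vector<bool>& visited, int from, int to)
{
    if(from == to) return true;
    visited[from] = true;
    for(std::size_t i = 0; i < adjacency[from].size(); ++i)
    {
        const int next = adjacency[from][i];
        if(!visited[next] && pathExists(adjacency, visited, next, to))
            return true;
    }
    return false;
}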
Warp wrote:
If you are going to store all the found states anyways, then why use DFS at all? BFS finds the optimal solution faster. As I said, DFS which never discards its history has the worst of both worlds (at least when the task is to find an optimal solution).
I have already explained this when I first mentioned DFS. DFS over a search space with more nodes than available fast memory would probably be more cache-efficient than BFS, as it is more likely to have a much smaller "working set" (nodes it needs to look up). Even though it'd have to search deeper than BFS, its total execution time can be considerably faster.
That assumes that both algorithms perform approximately the same amount of work in total. However, DFS might perform enormously more work than BFS in certain cases.
I've begun rewriting my solver to use DFS + caching using splay trees, but since DFS vs. BFS performance seems to be such a hot topic I think I'll take a detour and implement caching first, then benchmark it on BFS and DFS.
Note that I'm not really advocating either one. I'm just pointing out their benefits and drawbacks. The best solution would be, as mentioned, to devise some kind of heuristic which would cut off most of the search tree right away (even if the overall computational complexity remains the same).
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Maybe it's genuine, maybe it's staged (my bet is that it's genuine, judging from the other videos by the same guy), but in either case I think the guy is a genius. http://www.youtube.com/watch?v=YersIyzsOpc
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
I think the original poster meant "screen capture" or "screenshot" rather than "caption"? A caption is a piece of text, usually below an image (eg. on newspaper articles) or other contexts (eg. so-called closed captions). Maybe the title of the thread (and the original post) could be changed to avoid confusion?
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
arflech wrote:
Now I would like to know, what would likely happen when you try to compile and run this code?
int main(){for(int i=0,int *p=&i;p>0;*p=(int)p,p--);return 0;}
Segmentation fault.
Bisqwit wrote:
If the function prototype, which for printf, is defined in stdio.h, is missing, in C, it is assumed, for the function prototype, that it is like int func(...), which for most systems, and for most functions, including printf, works appropriately. In short, omitting the #include is not wrong.
That may be so de facto with most C compilers (for historical reasons). However, AFAIK the C standard requires functions to be declared before they can be called.
Swedishmartin wrote:
I just wanted to pop in and say: I still haven't learned pointers properly. For some reason they're significantly more difficult than anything else in C so far.
With C++ it becomes even more complicated. You can have pointers to class member functions (and if the function is virtual, dynamic binding is performed appropriately when the function is called through that pointer), and even to class member variables (in other words, you have a pointer of type "an int member variable inside class X", which you can then make to point to a certain int variable inside any object of type X, and the pointer will work for any object you use). Fortunately those are seldom, if ever, needed in practice.
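A small sketch of what such pointers look like syntactically (the class X and its members below are made up for illustration):

struct X
{
    int value;
    virtual void print() const {}
};

void example(X& obj1, X& obj2)
{
    int X::*memberVar = &X::value;             // pointer to an int member of X
    void (X::*memberFunc)() const = &X::print; // pointer to a member function

    obj1.*memberVar = 5;      // the same pointer works for any object of type X
    obj2.*memberVar = 10;
    (obj1.*memberFunc)();     // dynamic binding is used because print() is virtual
}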
But you can make pointers that point to functions, what does that accomplish?
There's an example in the standard C library: qsort(). (C++ offers a much more practical and efficient solution to the same problem, though.)
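For instance, qsort() is given its ordering through a function pointer (a rough sketch; std::sort() with iterators or a functor would be the more idiomatic C++ way):

#include <cstddef>
#include <cstdlib>

// Comparison function handed to qsort() through a function pointer.
int compareInts(const void* a, const void* b)
{
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    if(x < y) return -1;
    if(x > y) return 1;
    return 0;
}

void sortIntArray(int* array, std::size_t size)
{
    std::qsort(array, size, sizeof(int), compareInts);
}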
arflech wrote:
Oh, Derakon didn't mention that making functions that take "callback functions" as parameters is much less resource-intensive when you pass functions by reference, by making function pointers, because like all variables, if functions are not passed by reference, the whole function must be copied during the function call.
Functions cannot be passed by value, nor can they be copied. C is not that kind of language. I think you wanted to use some other term there instead.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
I really think I'm a lot more comp nerd than sci nerd, but whatever.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
CyberShadow wrote:
This is turning into an argument of semantics.
It's certainly not about semantics.
You say that keeping the entire search tree in memory is an innate property of BFS
It is for a working BFS. Obviously if a BFS ends up in an infinite loop, it's not working. It's a buggy implementation. (There are problems where checking for loops is not needed due to the nature of the search space, but in most puzzle games where repeating the same positions can happen any time, that's certainly not the case.)
and not of DFS
A DFS only needs to store the current search branch in order to avoid infinite loops. It has no requirement to store the entire search tree for it to work. That was the whole original point: DFS can work with a minimal amount of memory (even if it takes a lot longer to find a solution). Thus storing all the states found so far is a requirement for BFS but not for DFS.
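A rough sketch of that kind of DFS (the State type, isSolution() and nextStates() below are stand-ins for whatever the actual puzzle defines): only the states on the current branch are remembered, and they are forgotten again when the search backtracks.

#include <cstddef>
#include <set>
#include <vector>

typedef std::vector<int> State;              // stand-in for an actual puzzle state
bool isSolution(const State& state);         // assumed to be defined by the puzzle
std::vector<State> nextStates(const State& state);

bool depthFirstSearch(const State& current, std::set<State>& currentBranch)
{
    if(isSolution(current)) return true;
    if(!currentBranch.insert(current).second) return false; // loop within this branch

    const std::vector<State> successors = nextStates(current);
    for(std::size_t i = 0; i < successors.size(); ++i)
        if(depthFirstSearch(successors[i], currentBranch))
            return true;

    currentBranch.erase(current); // backtrack: only the current branch stays in memory
    return false;
}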
while I say that there is no discernible difference because there is no practical difference between infinite and astronomical execution time, so for a practical search over a graph you need to store the search tree for both.
If you are going to store all the found states anyways, then why use DFS at all? BFS finds the optimal solution faster. As I said, DFS which never discards its history has the worst of both worlds (at least when the task is to find an optimal solution).
Post subject: Re: Flashget to download from archive.org
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
p0rtal_0f_rain wrote:
I went to www.flashget.com and downloaded their free accelerator program where I had a faster download
I hope you understand how such "download accelerators" work (and why most of them are banned from many websites). Basically you are hogging the server's bandwidth at the expense of other people downloading from that server. Most high-traffic servers allocate their outbound bandwidth equally to all clients (as that's usually the default behavior for server software). "Accelerators" work by fooling the server into thinking that you are actually a dozen or so clients, by downloading the file in many pieces at once, starting from different sections of the file being downloaded (or if you are downloading multiple files, downloading them in parallel).

Thus if a server has, let's say, 10 clients at the moment, and one of them is using a download accelerator which requests the file in, let's say, 10 chunks in parallel, the server now sees 19 connections. Each of the other 9 clients gets 1/19th of the bandwidth, while the accelerator client gets 10/19ths (ie. more than half) of it. In other words, the accelerator is leeching the server at the expense of the other people, who are now getting a significantly slower download. It's no wonder that many servers ban download accelerators altogether.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Obviously you cannot perform a BFS if you don't store the paths which have been checked so far because, as you say, you would easily end up in an infinite loop. Rather obviously you need to keep the full search tree which has been constructed so far in order to avoid going to branches which would form a loop. The difference between BFS and DFS is that the former can also check if a parallel search branch has already checked some state, while a DFS cannot (unless it also keeps the entire search tree in memory and doesn't discard anything). (The other difference is, as mentioned, that with BFS the first found solution is also the optimal one.)
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Nach wrote:
Sorry for the delay, it's been taken care of. Edit: Although despite setting it to 0, the statistics seem to be showing the old values...
The script is getting the 6 (for Bonk's Revenge) and 1337 (for this movie) from somewhere. IIRC it gets all of its data from the MySQL database and nowhere else. Are you sure you modified the correct entries? Is the script reading from the correct database (are there more than one)? I went ahead and commented out the rerecording stats from the movie statistics page, until the issue is resolved. It can be easily uncommented.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Cut people some slack. It's not all that obvious that the performance is some kind of parody... :P
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
CyberShadow wrote:
Warp wrote:
A breadth-first search can check that no search branch merges with another branch.
I don't understand - how would it do that, without knowledge of all visited states?
A breadth-first search has, by definition, knowledge of all visited states so far. It never tosses anything away. Every time you advance by one step in one branch, you can check that the overall state doesn't exist already in the search tree.

A pure depth-first search has no such knowledge and can repeat the same search that it has already done in the past (the only thing the depth-first search can do is check that it doesn't end up in a state which it already reached in its current path, which would effectively become an infinite loop).

It is possible to implement a depth-first search which does not throw away visited states, so that parallel search branches can check if some state has already been reached and avoid going there again. But then you need the RAM, and you are in no way guaranteed that the first found solution is also the optimal one, so it kind of has the worst of both search methods. (There are some situations where this is a viable approach though, eg. if you are calculating some kind of cartesian product on graphs or something.)
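As a minimal sketch of what "knowledge of all visited states" means in practice (State, isSolution() and nextStates() are stand-ins for whatever the actual puzzle defines):

#include <cstddef>
#include <queue>
#include <set>
#include <vector>

typedef std::vector<int> State;              // stand-in for an actual puzzle state
bool isSolution(const State& state);         // assumed to be defined by the puzzle
std::vector<State> nextStates(const State& state);

bool breadthFirstSearch(const State& start)
{
    std::set<State> visited;                 // every state reached so far, in any branch
    std::queue<State> pending;
    visited.insert(start);
    pending.push(start);

    while(!pending.empty())
    {
        const State current = pending.front();
        pending.pop();
        if(isSolution(current)) return true; // the first solution found is also the shortest

        const std::vector<State> successors = nextStates(current);
        for(std::size_t i = 0; i < successors.size(); ++i)
            if(visited.insert(successors[i]).second) // skip states seen in any branch
                pending.push(successors[i]);
    }
    return false;
}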
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Kuwaga wrote:
That doesn't change the fact that they're also ridiculously good though.
The music maybe, the singing sounds awful.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Blublu wrote:
Warp wrote:
What is worse, when you find a solution, there's no way of knowing if it is the optimal solution (you can only know if it's better than the solutions you have found so far during the search). Basically you have to try all possible permutations before you can be sure that you have found the optimal solution. In typical puzzles that's an O(n!)-time approach, so for puzzles with just a couple of dozen states you may have to wait until the Sun explodes before your algorithm has checked all possible permutations.
Am I not understanding, or is this a slight exaggeration? You don't need to check *all* possible permutations. If there is at least one known solution, then the depth-first search need only go so deep. It doesn't need to find any permutations longer than the known solution. So after a certain number of steps, if a solution has not been reached, it can stop checking anymore in that direction because it will always be longer than a known solution anyway.
True, but the depth-first algorithm can potentially perform enormously more work than a breadth-first solution. For example, if the optimal solution is 10 steps long, but making the wrong choice at the beginning leads to a million other solutions of about one million steps each, the depth-first search might make that wrong choice first and go through the million suboptimal solutions before it comes back to the beginning, makes the correct choice and finds the 10-step solution. A breadth-first approach would quickly find the optimal solution.

Another problem with the depth-first approach is that it might take (after a small preceding detour) the same path as a previous sub-search already checked, and it has no way of knowing that there's no solution there. So it will check that path again, to no avail. (In order to avoid this, it would be necessary to store in memory all the paths which have already been checked, which is what the depth-first solution was trying to avoid to begin with.) A breadth-first search can check that no search branch merges with another branch.

This is not to say that the breadth-first approach is optimal and fast. Its problem is, of course, that it quickly consumes enormous amounts of memory if the optimal solution is even slightly long, and especially if the search space branches a lot. In the worst case the breadth-first approach cannot find the solution at all because it runs out of physical memory (while a depth-first approach could find the solution eventually).

Both are brute force approaches, and thus have the same problem. OTOH with many puzzles only a brute force approach can be sure to find the absolutely optimal solution, and a faster heuristic approach cannot make that promise (iow. if the problem is NP-hard). Sometimes there are ways to speed up even a brute force approach by taking advantage of how the puzzle works, which can often be used to reduce the constant factor of the computational complexity (even though it doesn't change the complexity itself).
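A small sketch of the depth-limiting idea from the quote above (State, isSolution() and nextStates() are again stand-ins for whatever the actual puzzle defines): branches which can no longer beat the best solution found so far are cut off, but in the worst case the search still does enormously more work than a breadth-first search would.

#include <cstddef>
#include <vector>

typedef std::vector<int> State;              // stand-in for an actual puzzle state
bool isSolution(const State& state);         // assumed to be defined by the puzzle
std::vector<State> nextStates(const State& state);

// Depth-first search which never descends deeper than the best solution found so far.
// Call initially with depth = 0 and bestLength = the length of a known solution
// (or some large value if no solution is known yet).
void boundedDFS(const State& current, int depth, int& bestLength)
{
    if(depth >= bestLength) return;          // cannot improve on the known solution
    if(isSolution(current))
    {
        bestLength = depth;
        return;
    }

    const std::vector<State> successors = nextStates(current);
    for(std::size_t i = 0; i < successors.size(); ++i)
        boundedDFS(successors[i], depth + 1, bestLength);
}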