Using a content-type header instead of a .json or .xml (etc.) extension in the URL is just another example of this phenomenon. There's a reason people are moving toward dot extensions in URLs: they're easier to adopt and test.
Funny - when looking at a codebase for the first time, I do almost exactly what Mr Seibel describes: I start rewriting it. I rename functions or methods that I think have poorly chosen names, I rename fields, variables, or parameters for the same reason, I refactor, restructure, and reformat the code to look the way I think it should look, and so on.
That sounds like it could be really beneficial to understanding a piece of code, but it seems like it would only be feasible if you were working alone, taking some code from somewhere else, and completely absorbing it - into a new project like Toot and Whistle or into some other existing project. Most of the times I've needed to ramp up my understanding of some code have been either at a new job or before contributing to an existing project.
Would you do this after starting at a new job, and make this your first commit? Or before contributing to open source?
I could envision some awkward social problems arising there. If you kept that code to yourself, but continued working on the old code, that would probably be frustrating.
I'm just curious because I'm really attracted to the idea of this method but am not sure if it would really work where I'd want it to.
As I think I mentioned in the interview, I've found that if I do this, by the time I'm done with my rewrite, I actually understand the original code too. So if I had to, I could throw away my new (better?) code and still benefit from a better understanding of the original code.
Funny, it's something I've always stopped myself from doing, wondering whether static analysis (call graphs and such) wouldn't be better.
<sidenote>
There should be a site with a substantial piece of code to explore, where people would answer what the main (3-5) steps they took to roughly understand it were, and how long it took.
I do this whenever I read complicated pieces of code. It's enormously helpful, even if I don't keep my changes around.
Some of the time the changes are broadly beneficial (like taking a multi-thousand-line file and adding some organizational structure) and it makes sense to commit them upstream. Some of the time the changes are personal preference and aid only in your own understanding of the code.
As with most things, the best approach is to use your judgement, not get too attached to your own changes, and to understand that context matters.
Same here. Sometimes I come across code that's not easily improved that way - and not just because it's fragile; it's already good. This seems less rare than, say, 15 years ago. Does it seem that way to the rest of you? That things are getting better?
While PowerShell is a valuable and interesting option, the problem with it is that it changes the basic metaphor.
For 40 years, Unix shells and their descendants and derivatives, including cmd.exe, have used files and text streams as the metaphor for interconnecting processes. PowerShell changes that: the output of one command goes to the input of another command as "objects".
This can be powerful, but it is also very disorienting, which means it can be hard to learn to do even basic things in PowerShell - things that would take only a pipe or two and a couple of UnxUtils programs in cmd.exe.
In cmd.exe, easy things are easy and hard things can be really hard. In PowerShell, hard things are hard (as opposed to "really hard") and easy things are hard.
I'm not a fan of PowerShell, but I have to use it in my line of work.
The one thing that I find really sucks is that the pipeline is slow as snails. For example, an svnadmin dump redirected to a file takes 8 minutes in cmd.exe but 14 hours in PowerShell...
> The only thing that I find sucks is that the pipeline is slow as snails.
This is because CreateProcess on Windows is slow. It's the reason that running make under Cygwin on Windows is really, really slow even for not-that-large Makefiles. The same Makefile differs in startup time between UNIX and Windows by a wide margin. It's really painful to type "make ..." and sit there for 30 seconds on a fast machine.
CreateProcess is only called once in this case, i.e. to spawn svnadmin. The exact script does the following:
svnadmin dump d:\repo > repo.dump
The output from svnadmin has a lot of lines. Because PS is built on top of the CLR, it reads each line into an immutable string before writing it to the file. So for every line it has to create a new System.String object and, as another poster said, GC it later. Also, since lines are not of predictable length, it has to buffer them, resulting in more overhead.
Effectively, where *NIX shells use a fixed-size buffer for pipe operations and operate on streams, PS has to convert everything to lines before writing it out.
That doesn't work when you have approximately 25 bytes per line and a 12 GB file, which is where the issue is.
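To make the difference concrete, here's a small sketch (my own illustration, not PowerShell internals): a stream copy moves fixed-size blocks, while a line-oriented copy handles one string per line - and shell `while read` loops are notoriously slow for exactly that reason.

```shell
# Stand-in for svnadmin's output: many short lines.
seq -f "node-path: file-%g" 1 20000 > dump.txt

# Stream-style copy, as cmd.exe/Unix redirection does: fixed-size blocks.
dd if=dump.txt of=chunked.txt bs=64k 2>/dev/null

# Line-style copy, as described above for PowerShell:
# one string allocated and handled per line.
while IFS= read -r line; do
  printf '%s\n' "$line"
done < dump.txt > by_lines.txt

cmp dump.txt chunked.txt && cmp dump.txt by_lines.txt && echo identical
```

Both copies produce identical bytes; the difference is purely in allocation and buffering behavior, which is where an 8-minutes-vs-14-hours gap can come from.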
I mean that from Microsoft's perspective: they apparently decided to put an object around something as simple and as essential to performance as a line buffer. That's when you should have hired a systems programmer to do the job.
Don't get me wrong - I actually like their approach of developing an OO shell - but if it hurts performance that much, someone has taken the paradigm too far. It's the typical case of someone with a hammer (an OO programmer) treating everything as a nail.
Perhaps the real solution to the performance issue you mention is to have svnadmin write to a target file itself, instead of crossing process boundaries to redirect stdout?
I do only easy things with PowerShell, and I find it much nicer than classical shells. It's more consistent in command names and parameter passing. You also need to learn far fewer commands, because PowerShell follows the Unix philosophy of small tools that do one thing well much more closely than Bash plus the Unix tools do. For example, if you run "ls" you get a table where one of the columns is LastWriteTime. Want to sort by that column? "ls | sort lastwritetime". This works without hassle because ls returns a list of objects, and sort sorts a list of objects by a given property, instead of everything being serialized to text that would need to be parsed first. I'm sure Unix's ls has a built-in option to sort by that column, but off the top of my head I have no idea what it is, and many commands that output lists of things have no such option.
You even get autocompletion across commands: if you type "ls | sort _" where _ is the cursor, you get a list of the properties that objects returned by ls can be sorted by (LastAccessTime, LastWriteTime, Extension, etc.).
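For what it's worth, the Unix flag the comment couldn't recall does exist: `ls -t` sorts by modification time (pair it with `-l` to see the timestamps). The broader point stands, though - that's a special-cased option of ls itself, not a general mechanism:

```shell
# Two files with different modification times.
mkdir -p sortdemo && cd sortdemo
touch older
sleep 1
touch newer

ls -t     # newest first: newer before older
ls -lt    # same order, with the timestamps shown
```

Unlike the PowerShell version, this only works because ls happens to offer the flag; a command without a sort option would need its output parsed and sorted by hand.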
I disagree. All the time spent finding and learning new tools can be spent getting things done. I use bash, vi, grep, and man and I can get 95% of what I need done with just those.
Good for you. As long as we're making up numbers: I can get 100% of what I need done in 80% of the time it takes you, simply because I use superior tools that eliminate menial labour. Get out of your bubble.
> it means the output of one command goes to the input of another command as "objects".
Only if both commands support objects. If one of them doesn't, then PowerShell just deals with text. I've sent the output of PowerShell commands to Perl scripts and GNU utils without any problems; it just worked. I'm still a PowerShell newbie, and this has made it easy to learn while still using what I know.
Which is nice, because you get fine-grained control over which band-aids you put over cmd.exe. This basically gives you readline and not much else, and you don't have to install multiple, possibly overlapping band-aids to get it.
Coders talk too much. They are in love with themselves.
Did you ever hear a cabinetmaker talk about all the considerations that went into the new cabinet he made, how he did the planing just right to hide that knot in the wood, or how he chose just the right species of wood, or how he had to feather the one cut because of a loose blade or the fact that the finish the customer asked for would expand the wood? Did you ever see a carpenter write 5 paragraphs on how much thought and consideration went into how he leveled the kitchen cabinet on such an uneven floor?
No. They know their jobs, they do their jobs, they appreciate and strive for excellence, and they go home at night.
Did you ever hear a doctor wax poetic about how fabulous a job they did excising the tumor from the patient's brain, how they brought in just the right amount of outside expertise, how they deliberated just long enough to be prudent and then took action at just the right time? How they balanced and weighed all the factors - the age of the patient, the seriousness of the tumor, the location, the likely disruption, the family support, and the post-operative therapy plan?
No. They do their jobs, they work hard, then they go home.
What is it with all these coders who cannot just do their jobs and do them well, and then shut up?
We GET IT. YOU'RE FABULOUS. YOU THINK IMPORTANT THOUGHTS ABOUT SOFTWARE AND INTERFACES AND SO ON.
Yes, programmers talk a lot more than doctors, carpenters, etc. But why is that a bad thing?
The programming profession, by its nature, changes much faster than, say, carpentry. Every day new frameworks, languages, and technologies pop up, so there is a lot to talk about.
Also, programmers work with computers all day - it's easy to blog or tweet when you're already at a computer 10-12 hours a day. Carpenters, on the other hand, need to switch context: go to a computer, log in to their blog, and so on - many steps before they even start typing a post.
I agree it's sometimes way too much noise, but maybe it's not as bad as you make it sound?
Who here coded a solver into the game that used the quadratic formula? I tied it to a particular key and it would momentarily, almost imperceptibly flash a correct solution on the page after I selected an angle. heh heh.
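For anyone curious what such a solver might look like: here's a minimal sketch (my own illustration - the function name and flat-ground physics are assumptions, not the game's actual code). Given a launch angle and speed, the quadratic formula gives the time of flight, and from that the landing distance.

```shell
# Solve  y0 + vy*t - (g/2)*t^2 = 0  for the positive root t,
# then print the horizontal landing distance in metres.
solve_shot() {
  awk -v deg="$1" -v v="$2" 'BEGIN {
    pi = atan2(0, -1); g = 9.81; y0 = 0
    vy = v * sin(deg * pi / 180)
    A = -g / 2; B = vy; C = y0             # A*t^2 + B*t + C = 0
    t = (-B - sqrt(B*B - 4*A*C)) / (2*A)   # quadratic formula, positive root
    printf "%.3f\n", v * cos(deg * pi / 180) * t
  }'
}

solve_shot 45 10
```

As a sanity check, the 45-degree shot at 10 m/s is the textbook maximum-range case, landing at v^2/g (about 10.194 m).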
That headline ought to read "Voyager reaches beginning of interstellar space". Interstellar space is the space between star systems, so it would be impossible for the craft to reach "the end" of it unless it exited the galaxy.
There's no clear line where our Sun's system ends, though. Maybe the heliopause, but Voyager ain't there yet.
The OP seems to be very willing to draw general conclusions when none are warranted.
When faced with a project in which code modularization was a problem, his proposed solution was to port the code? Seriously? Port JavaScript to ActionScript because the .js was organized into too few modules?
People suggesting that Go is a potential replacement for Python, Lua or Ruby are missing the point. IMO, Go isn't designed to compete with those existing languages for existing opportunities.
The key opportunity in the future is smart devices everywhere. Embedded, connected intelligence, everywhere. Everything is a communication device. Today, your phone and your car; tomorrow, your shoes, your office, the grocery store, your refrigerator. Think of Xbox Kinect-type sensors embedded into everything.
Writing solid C code for all those systems will be too hard. We also definitely do not want a serendipitously designed language (JavaScript). Yes, that leaves Python, Ruby, and so on, which brings us full circle. Go will compete with those languages, but not in the domains that are evident today - not in the web browser, and not in a new! improved! web server. It seems to me that Go is a forward-looking design, aimed at the challenges of the everything-connected world of tomorrow.
To make tomorrow happen, we need a better C. Go is that.
This is pretty typical of the series. There are 3 or 4 drivers per car in the Le Mans race (24 hours of continuous racing). Two or three of those will be professional drivers, and one is a rich amateur who brings money. Pay-to-play.
How much you pay and how much you play is up for negotiation. Every team is different.
> Two or 3 of those will be professional drivers, and one is a rich amateur who brings money. Pay-to-play.
Often, though, amateurs and pay drivers end up becoming respected professional racers (e.g. Schumacher). DHH's results suggest that, if he chose to, he might be able to do the same.