Hacker News | jack57's comments

Sorry for the double posts. An app I was using caused it.


This complexity, my technically oriented friends, is part of the reason normal people do not use Linux.


How did you even come to this conclusion? Knowing the magic key sequence is not required to operate a Linux system at all.

If a Windows box froze, and you had a (somewhat slim) chance of gracefully shutting it down, would you not use it?

Would you call pressing F8 to access magical boot options in Windows a reason non-technical people wouldn't use Windows?
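For reference, the "magic key sequence" being discussed is presumably Linux's Magic SysRq (Alt+SysRq plus r, e, i, s, u, b in order, the classic "REISUB" graceful-reboot sequence). A minimal sketch of the same thing via the kernel's /proc interface; the function name and dry-run flag are my own, not from any library:

```python
# Sketch of the Magic SysRq "REISUB" sequence for gracefully rebooting a
# frozen Linux box. On a live console you hold Alt+SysRq and press the
# letters in order; as root, the same keys can be written one at a time
# to /proc/sysrq-trigger. dry_run=True only records what would be sent.
import time

def reisub(dry_run=True):
    sent = []
    for key in "reisub":  # unRaw, tErminate, kIll, Sync, Umount, reBoot
        if dry_run:
            sent.append(key)  # just record the key, touch nothing
        else:
            with open("/proc/sysrq-trigger", "w") as f:  # needs root
                f.write(key)
            time.sleep(1)  # give each step a moment to complete
    return sent

print("".join(reisub()))  # dry run: prints "reisub"
```

Whether sysrq is honored at all depends on the `kernel.sysrq` sysctl, which many distros restrict by default.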


I found the handwritten Mark Zuckerberg signature a little pretentious. Maybe that's because of The Social Network, but I don't feel like other CEOs are using their own brand like that. It's like a fashion designer or even an artist signing their work... My video was not THAT good.


I think a lot of people would find this very useful. Could you post the link on this thread along with submitting it on HN?


There have been some changes to the API; I need to look into it.


Thanks for the additional information. It was difficult to decipher your problems from the original article. I'm guessing the "no variability in load" processes are where AWS was the most unnecessary.


Yes! 95% of our services were running on AWS. Paying AWS to run our website is not a good use of money; there are many cheaper ways to host a website.

Paying for 70-100 hours of compute and having the server crash in the middle of calculating your predictive analytics is not exactly all that bright either. So, we bought our own gear.

AWS works and worked with us pretty damn closely as we pulled apps out of the cloud.


I'm curious to know if you considered other hosting providers as an alternative to building out your own hardware/datacenters? I think it's reasonably well known that AWS is around 3 times more expensive than alternatives where you pay by-the-month instead of by-the-minute. Was a Rackspace/Linode/whoever implementation costed against a buy-your-own-boxes solution, and if so, is there anything you could share about why you chose the way you did?

(Oh, and thanks for the information you've already shared - even if you can't answer my curiosity here…)


We use other cloud services as well; Rackspace and Nimbix are a couple of them. We even looked at Azure, figuring there would be no contention for boxes, but we don't have any MSFT in our stack (sorry to my whole neighborhood of MSFT employees). We never depend on just one, and we do a detailed cost breakdown before we decide to buy or move a service. The ROI needs to be there.

As an aside, AWS has everyone beat when it comes to regions, however. We can be close to our customers in Europe, the US, and so on.

To date, no one is spinning up CloudFront nodes and services in more areas than AWS. It will take the MSFTs, IBMs, and the like to move the global cloud along. MSFT just needs to realize not everyone wants SharePoint, SQL Server, and .NET.

Which service you use is really situational. Plenty of good ones out there. AWS is just one!


AWS tends to pick locations based on cost rather than their proximity to major Internet hubs, so the number of physical locations they actually have for most of their services is deceptive. They only have 9 available regions, and their latency is going to be much higher than most other providers that are located right by major Internet hubs instead of in the middle of nowhere. We're a company (dedicated hosting provider) with only a team of 5 people, and we already have colo in 7 locations, which has them handily beat in North America and matched in Europe. There are also several VPS providers and resellers who simply rent dedicated servers from multiple companies like us, with more locations than AWS. I'm looking at the website of one of our clients now who has 20 meaningful locations. Akamai absolutely destroys CloudFront and everyone else in terms of presence, and a number of other CDNs have them handily beat. Neither AWS nor CloudFront is anywhere near being a leader in terms of meaningful geographical (i.e. network) presence.


I'm very curious about your ROI analysis for colo vs. a service like Rackspace's "Managed Colocation" [1] where a hosting provider runs the DC and provides the hardware and your folks manage the OS layer and up.

I run technical operations for a company with a fraction of your footprint (but growing quickly). We're at the point where we are growing out of the RAX public cloud but by my calculations, the decision to run our own private cloud in colocation vs. lease one from a service provider is (financially) a wash.

From a practicality standpoint, the scales tip towards leasing bare metal from a provider. I'm curious to hear your experiences with colo. How many folks do you have working in your colocation facilities doing hardware maintenance? What about network engineering? I presume you also keep a sizable stock of spares?

1: http://www.rackspace.com/managed_hosting/managed_colocation/


All good questions. The answer is pretty long-winded. In short, we love our bare metal and you are not going to be able to rip it out of our cold, dead hands. It is stable, fast, and cost-efficient.

Now for the long-winded version:

1. We do have a decent number of spares, but not a ton. Our contracts require replacement parts within hours to a day. For some items, like our F5 gear, we run two units with no spare; the vendor just replaces their gear in hours.

2. Each data center has 24/7 support that can do some minor tasks.

3. Yes, networking is a pain, and you need the right people to do it. It is not cheap either! Luckily our VP of Tech Ops is a networking guru. You mess up networking and you are hosed. Our first networking guy wasn't exactly top-notch, so we know this first hand.

Having said all of that, for us there are economies of scale. If we want to test different machines, databases, or any other combination at scale, it could cost us several hundred thousand dollars just to run the tests. Yep, we have dropped over $100k for testing at scale. It's simply not sustainable and an irresponsible way to spend investors' cash.

Also, when you add in multiple environments for dev, test, staging, and integration, you can quickly see we consume a lot of boxes. So many, in fact, that a lot of colocation/cloud services will not work with us unless we plop down large amounts of cash. Let's also factor in that AWS wants a large up-front spend for reserved instances. Thus, if you need that much capital to get the compute required to run your business, it is not very hard to just call Dell, Cisco, Nimbix, Equinix, or any other vendor and negotiate your own deals. If any of these companies can get half our yearly spend, they are willing to at least talk.
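The buy-vs-rent calculus above boils down to a simple break-even: hardware is a one-time capex, cloud is a recurring bill. A hypothetical sketch of that arithmetic (every number here is made up for illustration, not taken from this thread):

```python
# Hypothetical break-even sketch: months until owning hardware beats
# renting equivalent cloud capacity. All figures are illustrative.
def breakeven_months(hw_capex, colo_monthly, cloud_monthly):
    """Return months until cumulative savings cover the hardware spend,
    or None if the cloud is cheaper and you never break even."""
    monthly_savings = cloud_monthly - colo_monthly
    if monthly_savings <= 0:
        return None  # colo+ownership costs more per month than cloud
    return hw_capex / monthly_savings

# e.g. $200k of gear, $8k/mo colo + power, vs $30k/mo of on-demand cloud
print(breakeven_months(200_000, 8_000, 30_000))  # about 9.1 months
```

The real decision has more terms (refresh cycles, staff time, reserved-instance discounts), but this is the skeleton a "the ROI needs to be there" analysis hangs on.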


This is an awesome, simple idea. There's something to be said about simplicity. I wish I had an idea like that!


Wouldn't it be nicer if the domain names were divided alphabetically into 26 labelled pages? Right now it's 29 and difficult to browse.
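The letter-page idea above is a one-liner's worth of logic. A hypothetical sketch (the domain names are made up; only the grouping matters):

```python
# Hypothetical sketch: split a domain list into 26 A-Z pages for browsing.
from string import ascii_uppercase

def paginate(domains):
    pages = {letter: [] for letter in ascii_uppercase}
    for d in sorted(domains, key=str.lower):
        first = d[0].upper()
        if first in pages:  # skip anything not starting with a letter
            pages[first].append(d)
    return pages

pages = paginate(["alpha.com", "Beta.org", "bravo.net", "zulu.io"])
print(pages["B"])  # ['Beta.org', 'bravo.net']
```

Names starting with digits would need a 27th "0-9" page in practice, which is why such indexes sometimes end up with more than 26 pages.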


I'm in the same boat. This is pretty frustrating considering I'm paying for this special GPU instance by the hour.


Are you aware of any support boards or anything like that on which we could get resolution?


http://render.otoy.com/forum/viewforum.php?f=70 I started a thread when I could not find the AMI (honestly confusing that you must launch it through the AWS marketplace outside of the console), but now I've updated it with my current problem. Perhaps we should start a new thread for this issue.



Has anyone had any luck? I've set everything up and followed their instructions, but the URL they give you to access the web interface keeps timing out for me.


"This guy is a hardcore Linux geek converted into hardcore Mac-fanboy (polar opposite)"

I disagree that these are polar opposites. I have been a Windows guy most of my life, but after switching to Linux, I have learned to appreciate the merits of the UNIX basis of OSX.


> I disagree that these are polar opposites

What I mean by that statement (which seemingly wasn't completely obvious) was that he went from a "I want to be in control of and tinker with absolutely everything"-stance (Gentoo, enough said) to "I want everything working, done, out of the box"-stance (Macs).

I'd argue those are very much polar opposites.

