the4dpatrick's comments | Hacker News

I can personally speak to this because during the times I was working long hours, I was avoiding other aspects of my personal life and using programming/work as an escape. This colors my perspective when I see coworkers working these kinds of hours: I am unsure whether they are passionate or avoiding other things.


Product Focused Engineer & Designer

Location: San Francisco, CA

Remote: YES

Willing to relocate: MAYBE

Technologies: React.js, Redux, Node.js, Ruby on Rails, React Native, Sass, Electron.js, Chrome extensions

Résumé/CV: http://patrickperey.com/wp-content/uploads/2016/11/Patrick-P...

Email: resume@patrickperey.com


Google and Facebook are not immortal, infallible beings. These two entities are companies made up of people. At whatever level you look inside these two organisations, you'll see people making decisions. Wherever people are involved, there are bound to be inefficiencies and mistakes along the way. These inefficiencies and mistakes can be masked, however, by the size of the organisation and the limited impact of any single decision. Nonetheless, a series of compounding "mistakes" could leave these companies open to competitors.

A strength of these two companies lies in the number of talented individuals in these organisations. Google and Facebook both have a reputation for hiring the best and brightest. If that is the case, then we can assume they have the raw ability to make the right decisions (a debatable point). Given enough time and resources, these people could do almost anything. This is illustrated by the number of moonshots being attempted at Google.

Another strength of these large companies is the ability to diversify. Diversification comes in handy in an ever-changing world like ours. It comes in the form of external investments into potentially game-changing industries and technologies (AR, biotech, etc.). Alternatively, change could come from within the organisations, albeit a little harder and riskier.

These are a few of the strengths of large companies like Google and Facebook. Despite these strengths, new opportunities and new competitors will still arise. Google is a search engine, but it has moved into email and other markets. Google existed before Facebook, and Google had more resources than Facebook when Facebook got started. Why didn't Google take over social networking instead?

The OP asked whether Google will always be the default search engine, as it has been for the last two decades. In my opinion, this can change. With the advent of Amazon's Echo and Siri, more and more searches are being conducted via these platforms, both of which use the Bing search engine. [1][2] If Bing did a better job [3], it's conceivable that Bing could gain more market share. As human-computer interfaces progress, the act of visiting Google.com will become antiquated. That creates opportunities for other search engines to gain adoption more or less transparently.

Facebook is a social network we access via our web browsers and mobile devices. If VR becomes what is promised and is more widely adopted, then social networking as a category will be redefined. Facebook is at an advantage because of the Oculus acquisition, but the VR space is still so young; there are no real experts in VR.

TL;DR: A series of "mistakes" made by the people who make up these organisations could lead to true competitors. Strengths like talented people and diversification could counteract any risk from competitors. New platforms like Echo, Siri, and Cortana abstract away the use of the Bing search engine. VR could redefine social networking, which opens up more opportunities.

[1] https://www.quora.com/Can-I-change-Siris-search-engine

[2] https://www.reddit.com/r/amazon/comments/2lsg9n/amazon_echo_...

[3] http://thenextweb.com/gadgets/2015/07/08/alexa-y-u-no-answer...


^ Ditto


Could you give an example of what you mean by "tender-based software projects"?


The client writes a spec and solicits bids, instead of soliciting design partners.


Since you're a part of the target market, would you mind sending me an email? (email in profile)


I'd love to connect with you @montbonnot - my email is in my profile


> "I'm not sure how they call this psychological manipulation technique"

You could see this personal survey as a way for the site to make you doubt or question yourself subconsciously. For you to even read the question, you have to process that question introspectively. The more questions you answer, the more likely you'll find something you'll want to improve. Then miraculously this site is the silver bullet. Buy more sessions here [enter credit card info].

Another way to see this product, the survey, and the upsell is through the lens of NLP (neuro-linguistic programming). The two concepts to focus on are pacing and leading. You listen to the relaxing ambient music and become calm (pacing). Then the survey pops up while you're in a more docile state. You answer the questions, each of which ever so slightly leads you to the conclusion that this product is the solution to the problems you've clicked "Yes" to.


Are you saying that the site uses specially crafted music to trick you into believing that specially crafted music can modify your mind?


> "We have to put in our best efforts and then give ourselves permission to let whatever happens to happen"

A friend once shared how surfing illustrates the benefit of a process-driven approach over a results-driven approach.

In surfing you start off knowing the basics: how to get out on the water, stand up on the board, and ride the wave. But until you actually go out on the water, you won't know what will happen. The waves may not be a size to your liking. You could be having an off day and keep falling. Or you may be having a great day on the water. All of these are factors in whether you actually enjoy surfing.

Instead of focusing on trying to catch a good wave or catching many waves, you can focus on the process that is surfing. This way you can make incremental improvements to how you surf. You'll then see each wave as a new opportunity to gather more experience/data for the next time you try. And you can replace "surfing" with entrepreneurship, science, and many other areas.

With this mindset, you'll be able to let whatever happens to happen. Life is too short to always be chasing after the end goal. From my experience, after you attain the goal, you'll always have another goal in mind. You will never be truly satisfied.


This is similar to rock climbing and skiing. The commonality that I see is that they all require taking what comes at you, a random natural environment, and handling it as well as you can. They require an outward focus and quick adaptability to varying conditions, which is not so much the case for other sports like road cycling, running, and anything on an artificial surface.


Too bad I'd much rather be skiing than doing any of the very real work I need to be doing.


Life is a journey, not a destination.


Well summarized






This is exactly it. Goals will point you in the right direction, but the joy is in the pursuit.


This is still my git workflow, aside from the odd times I have to rebase or revert a commit.

I'm curious which git commands you've found the most valuable, or used the most, since digging deeper into git.


* Using stash to store stuff when I want to pull in a remote that would overwrite things I'm not ready to commit

* git add -p, git add -i are nicer ways to add files

* git grep

* git reset, revert, and checking out old commits

     - these commands I currently find tough to get right

     - this is mostly because I don't really get the HEAD~2 and ^ syntax for accessing older stuff

* git fetch and merge instead of pull. I got burned by using pull a few times.
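For the HEAD~2 and ^ confusion above, a disposable repo makes the revision syntax easy to poke at: `~n` walks back n first parents, while `^n` picks the n-th parent of a merge commit. A rough sketch (the repo and commit messages here are made up for illustration):

```shell
# Build a throwaway repo with three empty commits to experiment in.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
for msg in one two three; do
  git commit -q --allow-empty -m "$msg"
done

git log --oneline                # three (HEAD), two (HEAD~1), one (HEAD~2)
git rev-parse HEAD~1 HEAD^       # same commit: ~1 and ^ both mean "first parent"
git log -1 --format=%s HEAD~2    # prints: one
# HEAD^2 is different: the *second* parent, which only merge commits have.
```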

Most of this is stuff I've known about since I started using git, but I was afraid to use it because I didn't really know how it worked and didn't want to "mess up". Since the previous comment describes 90% of what I need to do, there wasn't really any point in doing it differently.

The biggest problem I have with git that I have yet to solve is that I will be working on something on my laptop and then want to switch to my desktop and pick up where I left off. This leaves the obnoxious necessity of committing just to sync things, rather than to actually finish a feature. I don't want to rebase because I don't want to lose history. This is the main reason I don't think git is currently an ideal solution for me, but since I have nothing better, I'm stuck with it.

It needs a simple semantic interface and it needs the ability to "sync" in-between commits.


Try

    git checkout -b wip-syncing
    git add -A
    git commit -m "wip means work in progress"
    git push <whatevs> wip-syncing
on your laptop, then

    git fetch --all
    git checkout <whatevs>/wip-syncing -- .
    git push <whatevs> :wip-syncing
on the desktop. Of course, this "rewrites history", but only in a very localized way.

In general, you're going to be fighting against git if you take an absolutist stance against rewriting history. Which is fine! But a little bit of controlled rewriting can open up a lot of options.

Edit: and I'm typing this from memory on my phone so please don't copy and paste the commands without verifying that they work correctly first!


Not a bad idea to keep a separate branch for doing that. I might try that out.


> The biggest problem I have with git that I have yet to solve is that I will be working on something on my laptop and then want to switch to my desktop and pick up where I left off.

Could you use something like rsync or unison to sync the working directory (including the .git directory) between your desktop and laptop? I'm new to git myself, but after reading through the OP article I imagine this would work.


Yeah, I've thought about rsync. It just seems like a half solution, and I'm not really sure how well it would work when I'm off my home network. Sometimes I ssh into the desktop because my laptop is old and runs into limitations with front-end build tools.


A couple of things helped me get into a comfortable flow w/ git: realizing git stash creates a commit (accessible via git reflog show stash). Very helpful for managing interrupts, and for gaining confidence you're not going to lose any work.

Also, learning to be quick to create (and dispose of) branches, as they're just names.


> accessible via git reflog show stash

You can just do "git stash list".


IIRC, standard git clients don't show you the commit sha with 'git stash list', hence the extra few chars (easily skipped w an alias) are worthwhile. shrug
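If the goal is just to get the sha alongside each entry, `git stash list` accepts `git log` formatting options, so a one-liner (the format string here is just one example) can show it directly:

```shell
# Each stash entry with its abbreviated commit hash and reflog selector.
git stash list --format='%h %gd: %gs'
# e.g.  1a2b3c4 stash@{0}: WIP on <branch>: ...
```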


For some time now, we've been using the rebase workflow (create your branch, do some work, rebase on master, push).

It is a great way to have a clean linear history.
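Spelled out as commands, that workflow might look roughly like this (the branch name and the `origin` remote are placeholders; adjust to your setup):

```shell
git checkout -b my-feature        # create your branch
# ...do some work, commit...
git fetch origin
git rebase origin/master          # replay your commits on top of master
git push origin my-feature        # history stays linear, no merge commit
```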

But it makes git pull 'illegal', because it does a merge implicitly.

That's typically something I didn't think about the first few times I used git.


> It is a great way to have a clean linear history.

Why is this considered by so many people to be a Good Thing? Engineering is an inherently messy human process, and the repository history should reflect what actually happened. To that end, I've been advocating a merge-based workflow instead:

- The fundamental unit of code review is a branch.

- Review feedback is incorporated as additional commits to the branch under review.

- The verb used to commit to the trunk or other release series is 'merge --no-ff'.

Under that model, merges are very common, particularly merges from the trunk to the feature being developed. But that's OK, because it's what actually happened. When most people perform a 'rebase', they are actually performing a merge, while dropping the metadata for that merge.
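That merge-based flow, sketched as commands (the branch names here are hypothetical):

```shell
git checkout -b review/login-form      # the branch is the unit of review
# ...commits, plus follow-up commits addressing review feedback...
git checkout master
git merge --no-ff review/login-form    # always record a merge commit
```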


Before reading more about rebasing, I wouldn't have had an opinion here, but like most things in programming, I think it's a matter of philosophy. Do we want the history to be a "record of what actually happened" or "a story of how your project was made"? [0]

I see merits in both approaches: Rebase seems to be good when you want to focus on the project minus the process, while merging seems to be good when you want to know the process behind the project. For larger projects with multiple contributors, I think the merging approach is better because of the process visibility. For smaller projects with one or two developers, a rebase approach could be "cleaner" when looking through the logs later on.

I'm interested to hear others' opinions on the topic as well.

[0] - https://git-scm.com/book/en/v2/Git-Branching-Rebasing#Rebase...


It is an interesting analysis. I think you're right that it's a matter of philosophy after all.

In my experience, a clean linear history can be important when you build a product that is going to be certified, since the development process is key to obtaining the certification.

Also, I like that you can always reorganize your commits before rebasing, making them more atomic/cleaner.


    git pull --rebase
doesn't merge implicitly and

    git config --global pull.rebase true
will set that as the default `pull` behavior.


Note that

  git config --global pull.rebase true
was added in v1.7.9 - if you're using an earlier version of Git for whatever reason, the config you should be setting is

  git config --global branch.autosetuprebase always


Didn't know that. Thanks

