v13inc's comments | Hacker News

There is potentially a big legal difference between tinkering on your side projects at work and carving out time for them at home. Unless you have a specific exemption in your contract, your employer owns all the work you do in the office or on their equipment.


Do these corporate politics apply to a company the size of, say, segment.io? How about Dropbox? Apple?

I have a hunch that the type of politics you encounter will vary widely based on the size of the company.

Edit: scare quotes were not needed


I would say it is probably present everywhere, but in different forms and to different degrees. My hunch is that it is most pronounced in companies whose core business is not technology (software/hardware), but it's just a hunch.


SEEKING WORK:

I'm a full-stack freelancer with experience managing, developing and deploying projects in PHP (Symfony, Drupal), Python (Django, Flask) and JavaScript (Node.js, Express, AngularJS, React). My specialty is front-end build systems and automated deployment for single-page apps.

I'm located in Victoria, BC, Canada, and am available for daily, weekly or project-based work.

Sean Clark

sean@v13inc.com

http://v13inc.com


I'm sorry for coming across as a dick, but the idea that you "wasted" time learning a framework that became obsolete is silly to me, and it seems to be a common sentiment.

Ask yourself: at the time you used prototype.js, did it save time on the project?

If you answered Yes, then it was never a waste of time. Knowing prototype.js AND jQuery makes you a better developer: you learned the hard way that abusing prototypes can lead to hard-to-understand code. That can only be a good thing!


No, you're not being a dick. You're right: it definitely made me a better developer. As I recall, I didn't switch because of any particular missing feature, but because jQuery was being maintained more consistently.

Community support makes such a big difference, and now that I'm further down the line I'm averse to having to basically make a bet up front.


Well, there's opportunity cost to consider.


Those problems are best solved with engineering best practices and culture, in my opinion. Each tough or innovative problem is probably somewhat unique to your startup, and picking a solution (a front-end framework) before you even know the problem limits your ability to solve it creatively.

Most programmers are good enough that, with a good refactoring culture, they can evolve the equivalent of an in-house framework. For some reason, though, programmers seem scared of in-house "frameworks". I think that attitude is short-sighted, since the app you build on top of the framework will end up being more complex than the framework itself.


It goes both ways. By rolling your own framework, you inevitably end up reinventing the wheel and solving problems that have already been solved. For each feature you need, you either have to create your own solution, or manually integrate a bunch of smaller libraries. On the other hand, committing to an established framework means you have to work around issues that the framework was not designed to solve.

I wouldn't dismiss using an established framework as "short-sighted". It's a tradeoff: the more complex and unique your problems are, the more it makes sense to roll your own.


I agree there. Deciding on tooling for a long-term project is a very tough balancing act.

That said, I am a bit afraid that people overestimate the costs of rolling your own code, or "re-inventing the wheel". In most cases you aren't reinventing the wheel, because there are well-documented bodies of reference for the design of almost any wheel you could need. Building (writing) a wheel (code) from scratch against a spec is much, much less complicated than inventing it.

Likewise: assembling your own set of design patterns and writing code from scratch is not "re-inventing", and is a lot easier than we give it credit for.


Yeah, that's completely fair. I generally work on projects with constantly evolving requirements, so I tend to roll my own framework(s) by gluing together existing libraries that each solve a specific problem very well. That approach works well for me because most of the time I simply don't know the long-term implications of using an existing framework for any given project, so it's easier for me to evolve my own as I go. But I think there are a lot of projects out there that benefit greatly from the ecosystem behind certain frameworks (Rails comes to mind) and don't run into many bottlenecks due to said frameworks. For them, assembling a foundation is totally unnecessary because there's an open source framework that provides exactly what they need.

I don't have enough experience in different types of environments to say which approach is most suitable in most cases, but I'll definitely say that using an existing framework is the safer path (you have a community to fall back on), and it's also advantageous for hiring. So I think you're correct when you say that many developers are afraid of rolling their own frameworks, but I think there are good reasons for that, especially for quickly-growing startups.


I couldn't have said it better myself :)

One hard lesson I learned is that you can't bet on a front-end framework having the same mindshare for very long. The churn can get pretty crazy, and in my mind this nudges the needle a bit towards rolling your own for long-term projects, especially if you can offload the complex parts of the architecture to the lower-churn back-end world.


> By rolling your own framework, you inevitably end up reinventing the wheel and solving problems that have already been solved.

You don't have to roll your own framework. You could always just use the micro-libraries that are ubiquitous in JS and pick an architecture that best fits your application. *shrug* To each their own. :)


Isn't that basically rolling your own framework? :) A "framework" doesn't have to be a huge 100k-LOC library--it can just be a set of conventions and design patterns with some code to enforce them--but you always need some kind of consistent structure in your application if you want it to be at all maintainable.


No, because a framework tells you where to put your code. It will say "put a Handlebars template in application.hbs; that's the default, or you can override it and load one manually", or something to that effect.

So a framework has that "convention over configuration" flavor, while libraries are explicit: you have to load the application.hbs file yourself with a Handlebars parser, then pull in another library for the router, and so on.
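To make the contrast concrete, here's a rough sketch of the explicit, library-style wiring. Handlebars and Express are just stand-ins, and the template path and route are made up for illustration:

    // Nothing is loaded by convention; you wire everything up yourself.
    const fs = require('fs');
    const Handlebars = require('handlebars');
    const express = require('express');

    // Load and compile the template explicitly...
    const source = fs.readFileSync('./templates/application.hbs', 'utf8');
    const render = Handlebars.compile(source);

    // ...and use a separate library for routing.
    const app = express();
    app.get('/', (req, res) => res.send(render({ title: 'Home' })));
    app.listen(3000);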


Maybe we have different definitions of what a framework is, but I strongly disagree with the notion that frameworks have to be implicit and magical. Libraries solve specific problems; frameworks help you structure your code. That doesn't mean your framework needs to automatically load files named a certain way, or magically call certain methods; it can just be a set of conventions that are optionally enforced by code.

I can't imagine the spaghetti that would result from not using any framework (even a tiny handmade one) and just throwing a bunch of libraries together.


What? By your definition object orientation is a "framework" because it "helps you structure code" and is "a set of conventions that are optionally enforced by code". That's not a framework, that's a paradigm!

If a framework doesn't do something implicitly it's just a large library. If it's a set of conventions not backed by baked-in logic, it's a style guide.

A framework must CALL YOU. It usually gives you a piece of code that loads itself and lets you customize what it does by passing your code/configs to it. Then you tell it to run with what you gave it. The parts of the framework that you call yourself are actually "plug-ins" or basically framework-specific libraries.

If the framework never calls your code and you only call into the framework, that's always just a library. I would argue that actually it's easier to conflate a very full-featured library with a microframework because both really kind of call your code (especially when it's in the form of closures or a DSL).

You would never accidentally call a framework a library, though, because it's obvious that it's handling things for you. It's running everything behind the scenes and you just kind of advise it to do the things you want.
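A tiny sketch of that distinction, using lodash and Express purely as familiar stand-ins:

    // Library: your code stays in control and calls into it.
    const _ = require('lodash');
    const names = _.map([{ name: 'Ada' }, { name: 'Grace' }], 'name');
    console.log(names); // runs when you decide it runs

    // Framework-style inversion of control: you register your code and
    // hand over the reins. You never call handleHome yourself; it gets
    // called for you whenever a request arrives.
    const express = require('express');
    const app = express();

    function handleHome(req, res) {
      res.send('Hello');
    }

    app.get('/', handleHome); // plug your code in
    app.listen(3000);         // tell it to run with what you gave it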


> A framework must CALL YOU. It usually gives you a piece of code that loads itself and lets you customize what it does by passing your code/configs to it. Then you tell it to run with what you gave it. The parts of the framework that you call yourself are actually "plug-ins" or basically framework-specific libraries.

I really like this description. I've been trying to come up with a better description of what a framework is and isn't and I kept falling short. This one works well. Thanks! :)


> Isn't that basically rolling your own framework? :)

*shrug* Personally, I don't see it that way. Frameworks are more generalized and reusable. They tend to be so large because they have to account for a wider range of problems. Applications with custom architectures and some external libraries are very specific and not typically reusable. Maybe it's just a matter of degree.


> the more complex and unique your problems are, the more it makes sense to roll your own.

Of course, everyone thinks their problems are complex and unique.


How many data points do you need to uniquely represent every person on Earth? I bet it's a smaller number than the number of constraints and requirements in your system.
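(Rough arithmetic to make that concrete: 2^33 is about 8.6 billion, which is more than the world's roughly 7 billion people, so around 33 yes/no data points are enough to distinguish everyone on Earth. Most systems have far more constraints than that.)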


What is wrong with leaving everyone high and dry? The CEO has no qualms about doing that to devs.

If an engineer leaving puts the company in a bad spot, then the company is understaffed and needs better management that can foresee and handle those transitions before they happen.

It's all part of proper engineering culture.


This is what happens when one person stops communicating with another without realizing it: the other assumes the worst and stops reciprocating. The first person is the only one who knows what's going on, yet it almost always comes down to them asking, "Why should I be the first to re-establish communication?" You should do it because you're an adult and the other person isn't a mind-reader; if you don't, you don't deserve any sympathy when the shit hits the fan.


I think that even before you bring responsibility into the picture, there's a lot for each actor in this situation to gain from re-establishing communication (I certainly can't see what they would lose, except face). This assumes the other party is mature and rational, though.


I think the truly important question is: are those differences caused by biology or by society?


If it is society, how should it be handled? Should schools educate girls to take more risk while boys are instructed on the negative aspect of risk taking?

Or, to phrase the question differently: if risk-taking is a socially learned behavior, and it is the cause of the gender imbalance among SV entrepreneurs, what would be the ethical action to create equal opportunity for people of both genders?


> Should schools educate girls to take more risk while boys are instructed on the negative aspect of risk taking?

Let's ask the question another way: if schools (and parents) are currently educating boys to take more risk (or educating girls to take less risk), should they start treating both sexes the same way?


I have heard of no school that has a program to increase risk taking in boys, nor do I know of any teacher education that instructs teachers to encourage boys to take more risk. Could you explain why you are suggesting that some do?

What schools could do is educate about actual risks, so that any personal or cultural level of risk aversion is confronted with reality. That would treat children of both genders equally and could mitigate differences between them.


> I have heard of no school that has a program to increase risk taking in boys

Those biases are subtextual, but very powerful. There's a nice recent article demonstrating some of them: http://www.slate.com/articles/technology/bitwise/2014/12/wom...


Inculcating risk tolerance could itself be risky... I'd rather sensitize boys/men to risk than make girls/women more risk-tolerant.

Risk tolerance leads boys and some girls to take stupid risks from adolescence onward, and for boys it ends in over-representation in correctional institutions.

If one could get everyone to avoid stupid risks but take other, more acceptable ones, fine, I suppose, but can the two be decoupled?

This is not to say that opportunities should not be there for all to consider and take, if they so desire... just that making risk in general more acceptable can have unintended consequences, unless there is a way to direct that risk energy into productive avenues (i.e., more tolerance for job risk but less appetite for dangerous activities).


It's far worse on low-memory Android, in my opinion. iOS apps have to support being suspended and resumed (although bad apps still do it badly), which forces them to deal with that eventuality from day one.

Android, on the other hand, doesn't force this. All apps assume they can do whatever they want in the background without being shut down. On my low-memory Android, this means that an intense browsing session followed by Google Maps kills all of my background chat apps, making me unavailable. That is far more unacceptable, in my opinion, than having to wait for an app to reload.

(I would agree that iOS is too aggressive when it comes to memory management, though)

Edit: Formatting


I agree with you there; I was only thinking of flagship Android devices because they are in the same class as the iPhone. I've never owned a lower-end Android phone, but I imagine they don't scale downward as well as iOS does.


That almost looks like it was built by a mechanical engineer, not an electrical engineer. All the effort seems to be focused on building a sturdy case for the batteries, and the actual electrical connections are all steel screw terminals and crimping, without any spot welding anywhere.

I'd be worried about the electrical resistance of all those contacts, and the heat they produce. Tesla's battery pack seems like a more intelligent electrical design, with a barebones mechanical design to back it up.


Having a mechanical engineer drive the design of a battery pack isn't necessarily a bad thing; IMO it's the sensible route for a mass-market car made by a mass-market company.

Modern high-volume, quality-focused manufacturing will often prioritize ease of assembly over elegance in design. Fewer, simpler steps make for fewer defects and greater product consistency. It's the whole "lean manufacturing" ideal at play: done right, it lowers costs and improves quality to the point that you can splurge on a little engineering elegance, such as a small-volume hybrid model capable of being assembled on the same line as your high-volume offerings. Tesla doesn't have to worry about this for now, as they compete in a high-margin segment of the industry and are trying to set a benchmark of excellence that will create demand for their products.

A real-world example I've seen in person is the Nissan Leaf. The battery is designed to be as safe as possible for handling by a line worker, and that allows Nissan to produce the Leaf at the same time as plain-vanilla Altimas are rolling down the line. If you ever take a tour of their Smyrna, TN plant, you'll see one line with 9 or 10 Altimas interspersed with a Leaf every once in a while. It's smart engineering from a production-level viewpoint, as a "worse" battery allows an entirely different product to be produced with minimal production overhead.


At these elevated voltages the currents are manageable. If this were a 24V pack then your concerns would carry far more weight.

The biggest issue is vibration loosening a connection in the longer term, but there are plenty of time-tested ways to keep that from happening.

Even forklift battery packs (which can carry immense currents) are still built using the crimp-terminal/bolt-through method.


Why can't you just verify that the whole chain is SHA-1 instead of using the expiration date as a heuristic?


Because then everything will seem fine until 2017, at which point all the sites break at once. Using the expiration date makes the transition gradual and surfaces problems when certificate updates are tested.
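(If you do want to check by hand which signature algorithms a site's chain uses, the stock openssl tools will show you; example.com here is just a placeholder:)

    # Print the leaf certificate's signature algorithm.
    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -text | grep 'Signature Algorithm'

    # Add -showcerts to dump the intermediates too, then inspect each one the same way.
    openssl s_client -connect example.com:443 -showcerts </dev/null 2>/dev/null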

