
This is very debatable. The courses look like they were recorded in the 90s.

The DB course particularly sticks out. My undergrad's DB course was fathoms harder than this. This is what you'd expect a high schooler to be able to learn through a tutorial, not a university course.

If it doesn't talk about system calls like mmap, locking, and the design of the buffer pool manager, it's not a university database course; it's a SQL and ER modelling tutorial.


Respectfully, I think you should do more research.

The OMSCS program is well known and well respected in the tech industry. It's a master's degree from the currently 8th-ranked computer science school in the U.S.

The university makes no distinction between students who take the courses online vs. in person. I.e., the diplomas are identical.


I’ve taken graduate-level courses in databases, including one on DBMS implementations and another on large-scale distributed systems, and I also spent two summers at Google working on Cloud SQL and Spanner. Database research goes further than DBMS implementation research. There is a lot of research on schemas, data representation, logic, type systems, and more. It’s just like how programming language research goes beyond compilers research.

I don't think watching the lectures is the hurdle that anyone at OMSCS is trying to jump. The program has a pretty low graduation rate, and the tests are known to be fairly difficult, which essentially requires the student to do work outside of class or go to the resources available through GT to understand the material. I can look up the highest quality lectures on any subject on YouTube, it doesn't mean I will understand any of it without the proper legwork.

FWIW I meant the diploma is identical, the actual experience will obviously vary. Some people will get better outcomes online, some will get better outcomes in person.


Is this a common thing to have at university? I'm from one of the top universities in Poland; our database courses never included anything more than basic SQL, where cursors were the absolute end. Even at Masters.

Yes. It is. Your database course was apparently broken.

I can tell you something scarier.

My specialisation there was databases.

...

Do not worry, I do not work with databases as the main aspect of my professional life. But I was not given a comprehensive education, and not once was there a focus on anything more in depth. I came out without even knowing how databases work inside.

Naturally, I know what I could do: read a good book or go through open source projects, like SQLite. But that knowledge was not what my uni gave me...

I am jealous of American/Canadian unis in this aspect.


DB is known to be a weaker offering.

https://www.omscentral.com/


Google has a monopoly on places APIs. I tried Foursquare and their public dataset and got tons of wrong locations for places. It even got the location of the Eiffel Tower wrong.

That’s why you have to make your own geocoder

How do you do that, and how precise is the end data?

Hire fast and fire in 2 days, like Elmo told them, is the new playbook. Keep teams super lean, 20 devs max, to keep Dropbox working. Question is why a staff engineer who can create new tech on the scale of complexity of Netflix, X, etc. would even need a job.

Doesn't take too much IQ to create an LLC and buy a domain name.


In other words you have competitive advantage because your cloud costs will be 10x less.

This is exactly what 10 years of experience did for you. Why complain?


Smells like the next SBF case.

How on earth would gzipping larger amount of data be more efficient than gzipping smaller amount of data?

It's a question of entropy. Data is rarely truly random, and with larger data there is a much higher chance of this "unrandomness" occurring.

If your data consists of 4 kilobytes of just 00_01, then you gain a lot by just remembering:

  "write 00_01 2000 times".
Conversely, if the small amount of data is 00_01_00_01_00_01, then using the previous format would yield:

  "write 00_01 3 times"
As you can see, it does not save nearly as much space relative to the original data, hence the format is less efficient here. The specifics are highly dependent on the compression algorithm used, so take the example with a grain of salt, but I hope it gets across the basic idea of why compressing larger data can be more efficient.
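A quick way to see this effect is with Python's stdlib gzip module (the exact byte counts will vary by zlib version, so this is just a sketch): gzip carries roughly 18 bytes of fixed header/trailer overhead, so a tiny repetitive input actually grows, while a large input built from the same pattern shrinks dramatically.

```python
import gzip

# A tiny repetitive input vs. a 4 KiB one built from the same 2-byte pattern.
small = b"\x00\x01" * 3      # 6 bytes
large = b"\x00\x01" * 2048   # 4096 bytes

small_gz = gzip.compress(small)
large_gz = gzip.compress(large)

# The fixed gzip overhead dominates the tiny input, while the large
# input's repetition lets the compressor replace almost everything
# with back-references.
print(len(small), "->", len(small_gz))  # compressed output is larger
print(len(large), "->", len(large_gz))  # compressed output is far smaller
```

The compression ratio of the large input is much better than that of the small one, even though both are maximally "unrandom".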

George Carlin put this very succinctly: "it's a big club and you ain't in it".

Are you hiring remotely in Europe? Gonna blow your mind with my skills if you are.

It's all marketing, I can sell this to you and convert you.

Thing is, it may have some interesting challenges. I, too, wouldn't want to solve some insane string-parsing problem with no interesting idea behind it. For today's problem, I did the naive version and it worked. The modular version created some issues with some corner cases.

There should be more events like AoC. Self-contained problems are very educational.


2 measly SQL injections and down goes 23andMe.

There was no SQL injection. The attack was basically the same as if someone stole the password to a friend's Facebook account, and proceeded to scrape the posts everyone else had made visible to that friend.

Some would say SNP data is more valuable than your posting history. I'm not so sure, since after all 23andMe went bankrupt trying to monetize their data and reddit didn't. It seems possible to me that a post where you say you do X is more useful to advertisers and political propagandists/spies, than a SNP which suggests you're 20% more likely to do X.


I am reading more on the attack vector used on 23andMe, and it seems the attackers used credentials from other data breaches. This never would have happened with MFA; even SMS confirmation would've been enough.

It's insane that a company that literally stores DNA data didn't have the most basic defenses against data breaches that would take an intern 15 minutes to read about.

