Hacker News | edmorley's comments

I was about to rule out Poetry due to pyup not supporting it, however it turns out Dependabot (which as a bonus looks to be more actively maintained than pyup) supports it:

https://dependabot.com/blog/announcing-poetry-support/


Another option similar to Poi, that I'd recommend adding to any comparison, is https://neutrino.js.org/


For more on the rationale behind this feature, see:

https://www.ietf.org/mail-archive/web/httpbisa/current/msg25...

https://bitsup.blogspot.co.uk/2016/05/cache-control-immutabl...

Rough summary:

> At Facebook, ... we've noticed that despite our nearly infinite expiration dates we see 10-20% of requests (depending on browser) for static resource being conditional revalidation. We believe this happens because UAs perform revalidation of requests if a user refreshes the page.

> A user who refreshes their Facebook page isn't looking for new versions of our _javascript_. Really they want updated content from our site. However UAs refresh all subresources of a page when the user refreshes a web page. This is designed to serve cases such as a weather site that says <img src="" ...

> Without an additional header, web sites are unable to control UA's behavior when the user uses the refresh button. UA's are rightfully hesitant in any solution that alters the long standing semantics of the refresh button (for example, not refreshing subresources).
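Concretely, the proposed directive extends Cache-Control. A response for a fingerprinted static asset might look like this (illustrative values, not Facebook's actual headers):

```http
HTTP/1.1 200 OK
Cache-Control: public, max-age=31536000, immutable
Content-Type: application/javascript
```

With `immutable` present, a supporting browser skips the conditional revalidation request on reload and serves the cached copy directly until max-age expires.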


What's the hurry to optimize away the revalidation requests when the user clicks reload? Is it just a beancounter mindset about saving a few "304 Not Modified" responses? In that case they shouldn't count the percentage of requests, but the percentage of bandwidth or CPU seconds. Tiny responses are much cheaper with HTTP/2, so be sure to benchmark with that.


I'd be happier if my browser made fewer requests, however small they may be. Even if they make up a tiny percentage of my internet traffic, it all adds up. Making computers do less unnecessary work is a good thing.


People are notoriously poor at predicting what increases their happiness; see Daniel Gilbert's book[1].

[1] https://en.wikipedia.org/wiki/Stumbling_on_Happiness


At Facebook scale, the sum of all those "304 Not Modified" responses is probably a significant amount of resources.


I'm not sure it's a good argument to take the biggest companies and then tally up the effects of a micro-improvement. You could argue for all kinds of complexity-increasing changes that yield 0.01% efficiency improvements this way.


At my company, 304s account for 3% of our CDN requests.


304 responses are so tiny that they probably end up on the order of 0.01% of the bandwidth.
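A rough sanity check, using made-up but plausible sizes (a ~200-byte headers-only 304 versus a ~100 KB full asset) and the 3% request share quoted above:

```python
# Hypothetical numbers for illustration only.
full_response_bytes = 100_000   # assumed average size of a full asset response
not_modified_bytes = 200        # assumed size of a 304 (headers only)
share_304_requests = 0.03       # "304s account for 3% of our CDN requests"

bytes_304 = share_304_requests * not_modified_bytes
bytes_200 = (1 - share_304_requests) * full_response_bytes
byte_share = bytes_304 / (bytes_304 + bytes_200)

print(f"{byte_share:.4%}")  # a few thousandths of a percent of total bytes
```

Under these assumptions the 3% of requests really does shrink to roughly 0.01% of the bytes, though the CPU and latency cost per request doesn't shrink with response size.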


Very few entities operate at "facebook scale".


But they still account for a lot of the Web traffic.


Think dropped requests. If you're operating with 99% packet loss, as many people around the world are, minimizing the absolute number of requests can have a dramatic impact on load times.


You can't do anything with TCP or the web in a situation like that.


What do you mean? I've operated at 99% packet loss plenty of times in the rural parts of Vietnam. The key is you can't use any of the websites or apps that are popular in North America / Europe.


The likelihood of even opening a TCP connection is going to be very low. Stacks typically send 5 SYNs before giving up, so 95% of your TCP connection attempts will slowly fail due to timeout before you even complete phase 1 of the 3-way handshake. To say nothing of actually transmitting or receiving any payload data successfully, which you would have to do many times to open a web page.
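The arithmetic behind that 95% figure: at 99% per-packet loss, the chance that at least one of 5 SYNs gets through is only about 5% (and this generously ignores the SYN-ACK having to survive the return trip):

```python
loss = 0.99          # assumed per-packet loss rate
syn_retries = 5      # typical number of SYNs a stack sends before giving up

p_all_syns_lost = loss ** syn_retries
p_connect_attempt_survives = 1 - p_all_syns_lost

print(f"all 5 SYNs lost: {p_all_syns_lost:.1%}")                       # ~95.1%
print(f"at least one SYN arrives: {p_connect_attempt_survives:.1%}")   # ~4.9%
```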


I understand their rationale for it, but I don't think it should be implemented. Having something immutable like this means it can end up being used for tracking purposes. Just add a <script src="trackingcookie.js" /> that calls a function with the cookie and all of a sudden there's yet another place to covertly store an ID for tracking a user.


But you're removing HTTP requests, so how can it make tracking easier? Any ID that a company puts in their immutable content can also be put in their normal content. The change doesn't make tracking any easier than it already is.


It makes it another place that they can store the tracking cookie and have your browser give it back out to them. Deleting the cookies or other such things wouldn't remove the tracking ID as long as the browser continues to use the cached immutable resource. Similar to how this[1] works by using multiple storage methods.

[1] https://github.com/samyk/evercookie
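A minimal sketch of the concern (hypothetical simulation, not evercookie's actual code): the server embeds a unique ID in a script marked immutable, and for as long as the cached copy survives, every "visit" replays the same ID without any request reaching the server:

```python
import uuid

# Simulated browser cache: URL -> (body, cache_control)
browser_cache = {}

def server_response(url):
    """Server embeds a fresh tracking ID in an 'immutable' script."""
    tracking_id = uuid.uuid4().hex
    body = f"reportId('{tracking_id}');"
    return body, "max-age=31536000, immutable"

def fetch(url):
    """Browser: serve from cache without revalidating if marked immutable."""
    if url in browser_cache:
        body, cache_control = browser_cache[url]
        if "immutable" in cache_control:
            return body  # no request made; the old ID is replayed
    body, cache_control = server_response(url)
    browser_cache[url] = (body, cache_control)
    return body

first = fetch("https://example.com/trackingcookie.js")
second = fetch("https://example.com/trackingcookie.js")
assert first == second  # same ID survives, even after cookies are cleared
```

Clearing cookies wouldn't help here; only clearing the HTTP cache would evict the ID, which mirrors the multi-storage approach evercookie demonstrates.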


This is certainly an interesting technique; however, in this case surely git-reflog would have been easier and wouldn't have lost the code comments etc.?


He presumably used checkout on a file with uncommitted changes.


When connecting between different tube lines, much of the time users remain inside the ticketed area of the station, so only tap their Oyster/contactless card when they pass through the entrance/exit barriers at the very start and end of the journey. (Though there are a few stations where there is not a direct tunnel between lines, and travellers are required to connect via a separate street-level entrance, where they would tap their card mid-journey.)

In cases where there are multiple routes to complete a trip (eg remaining on one line vs making multiple connections for a faster journey) it was therefore previously not possible to determine what percentage of people chose which route.


I'm glad that post explicitly called them out on the deception:

> The representatives of WoSign and StartCom denied and continued to deny both of these allegations until sufficient data was collected to demonstrate that both allegations were correct. The levels of deception demonstrated by representatives of the combined company have led to Mozilla’s decision to distrust future certificates chaining up to the currently-included WoSign and StartCom root certificates.

Contrast this to WoSign's announcement:

> WoSign also made a careful investigation of these issues and issued a report on these issues, some problems have been clarified, and all problems are found in the first time and fixed. WoSign actively cooperate with the investigation and communication to guarantee the issued SSL certificate will not be affected in any way.

(https://www.wosign.com/english/News/announcement_about_Mozil...)


This makes a big difference to small projects using the $7 hobby dynos, where the $20 of the SSL Endpoint add-on made Heroku less attractive than other options.

I'm interested to know how performance compares to the add-on, which uses a dedicated ELB per app (which is why it cost $20). On the one hand I would imagine switching to this new feature removes the need to pre-warm the endpoint (https://devcenter.heroku.com/articles/ssl-endpoint#performan...), but on the other could presumably introduce noisy neighbour issues.

"we will be rolling out exciting new features to it over the coming months" ...native Let's Encrypt support perhaps? :-)


NoScript causes this for some reason


I do have NoScript, but I don't know what NoScript feature would force this page to HTTPS.


Probably the HTTPS enforcement: https://noscript.net/faq#qa6_3


Thanks. Wow, can NoScript be unpredictable.

1) "Forbid active web content unless it comes from a secure (HTTPS) connection" would, I assume, block the active elements, not redirect the entire page to HTTPS.

2) I had it set as:

Forbid active web content unless it comes from a secure (HTTPS) connection = Never

You'd think that would disable it, but according to the question just above the one you linked, Never apparently means Always unless the site is whitelisted:

----

Open NoScript Options|Advanced|HTTPS|Behavior, click under Forbid active web content unless it comes from a secure (HTTPS) connection and choose one among:

1. Never - every site matching your whitelist gets allowed to run active content.

2. When using a proxy (recommended with Tor) - only whitelisted sites which are being served through HTTPS are allowed when coming through a proxy. This way, even if an evil node in your proxy chain manages to spoof a site in your whitelist, it won't be allowed to run active content anyway.

3. Always - no page loaded by a plain HTTP or FTP connection is allowed.

