I started this project while struggling with slow Selenium tests in CI. There are a lot of hosted Selenium services, but none geared towards parallelism; this fills that niche.
Nice! We've struggled with slow Selenium for a long time and have tried various ways to fake parallelism. This sounds like it could be what we need! Thanks for your work!
That's great to hear! I have some other ideas to improve the performance and reliability of Selenium, but I want to focus on the problems causing the most pain. If you want to chat, I'm at len@browsertron.com
Looks great. My only thinking is how I would interpret the pricing.
It is $0.003 per second. So if a single test took 10 seconds, it would cost $0.03. It doesn't say whether that's per second per test, or per second of parallel usage.
If, for example, I have 250 tests each taking an average of 10 seconds, is that 2,500 seconds, so $7.50 per test suite run? If not, surely I can't run 250 tests in parallel and pay for only 10 seconds.
I see how that could be confusing; I'll add some clarity around pricing. It currently mirrors the way serverless computing is priced: per second of compute time, per instance. So your second example is correct: 250 tests at 10 seconds each is 2,500 instance-seconds, or $7.50 per suite run, even if they all run in parallel.
I started with $0.003 per second because it aligns with existing cloud-hosted grids. A typical plan is ~$200/month for 2 parallel instances and 1,000 minutes; ignoring the parallelism cap, that works out to ~$0.00333 per second. And 20+ parallel browsers on existing services cost at least a couple thousand dollars every month.
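To make the per-instance-second billing concrete, here's a minimal sketch of the arithmetic from the examples above (the rate and plan numbers are the illustrative figures from this thread, not an official price list):

```python
# Illustrative per-instance-second pricing, using the numbers discussed above.
RATE_PER_SECOND = 0.003  # dollars per second, per browser instance

def suite_cost(num_tests: int, avg_seconds: float) -> float:
    """Cost of one suite run: every test bills its own instance-seconds,
    no matter how many instances run in parallel."""
    return num_tests * avg_seconds * RATE_PER_SECOND

# 250 tests at ~10 s each = 2,500 instance-seconds, even if wall-clock
# time for the whole suite is only ~10 s.
print(suite_cost(250, 10))  # prints 7.5

# Comparison from the thread: ~$200/month for 1,000 minutes on a hosted grid.
print(round(200 / (1000 * 60), 5))  # prints 0.00333
```

The key point is that parallelism changes the wall-clock time of a run, not its cost: the billed quantity is instance-seconds.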
I don't think remote grids are the best solution for every test performance problem, and for some test loads, local runs can be faster. That said, pricing weighs into the remote vs local decision, so I really appreciate the feedback.
I've been using this for a couple of weeks, and it works really (surprisingly!) well. Would have loved better examples to get started, but all I had to do was replace the connection string.
That's good feedback! Cross-browser testing is important for many companies (my past companies included), so if you find those runs valuable, the question becomes how often they should happen. They tend to be expensive and slow, so they're often run only once a day or once a week, with a quick pass in Chrome or Firefox much more often (on every pull request if possible). Browsertron is built for running your tests in CI against every build. The development loop is where you want the fastest feedback, so that's what it optimizes for.
Changing where your tests run should be as easy as swapping out the connection string, so you could iterate quickly against this, then swap the string back for your cross-browser tests.
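As a sketch of what that swap looks like in practice: all the hub URLs below are hypothetical placeholders (including the Browsertron one), and the stage names are just an assumed CI convention. The only thing that changes between runs is the string handed to Selenium's `webdriver.Remote`:

```python
# Hypothetical hub URLs; swapping the connection string is the only change
# needed to move tests between a local grid, a fast parallel service, and a
# cross-browser service.
LOCAL_HUB = "http://localhost:4444/wd/hub"                         # local Selenium grid
FAST_CI_HUB = "https://hub.browsertron.example/wd/hub"             # assumed fast-feedback hub
CROSS_BROWSER_HUB = "https://hub.crossbrowser.example/wd/hub"      # assumed nightly hub

def hub_url(stage: str) -> str:
    """Pick the remote hub by pipeline stage; the tests themselves stay identical."""
    return {
        "pr": FAST_CI_HUB,             # quick pass on every pull request
        "nightly": CROSS_BROWSER_HUB,  # slower cross-browser pass once a day
    }.get(stage, LOCAL_HUB)

# In the test setup this would then be the only line that touches the service:
#   from selenium import webdriver
#   driver = webdriver.Remote(command_executor=hub_url(stage),
#                             options=webdriver.ChromeOptions())
```

Keeping the URL selection in one helper means the "run fast on every PR, run everywhere nightly" strategy is a one-line configuration choice rather than two test suites.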
I think this merits a blog post on cross-browser testing strategies, thanks!