Show HN: I made a Chrome extension that can automate any website (browserflow.app)
707 points by namukang on Nov 17, 2021 | hide | past | favorite | 209 comments



I'm a speech therapist who does teletherapy. I routinely fill out a lot of web forms. I've already created one automation that will save me time, and I have ideas for several more automations. Thank you so much for making this!


Oh man, this makes me super happy to hear. Enjoy, and feel free to reach out either by email (support@browserflow.app) or on the Discord if you need any help with your automations!


Interesting... are you learning/using programming as part of your work or for your own interests? If not, I'm curious what value you feel you get from Hacker News; I know a lot of non-programming topics get discussed here, I just typically assume the audience here skews more towards people working in technology/software-related fields.


I tried teaching myself programming at one point, but it didn't really "click" for me. I browse HN for the non-programming topics. I steer around the "built with Rust" posts (and other programming motifs).

The value I get from Hacker News is the aggregation of interesting web content, good discussion, and learning about new apps like this one :)


If you'd be so kind... I'm helping out in a similarly standardized testing industry. How are you using this extension?

And DK: I didn't see it immediately - does an affiliate program for solution architects sound right / in the roadmap?


I hadn't considered it, but I'm happy to explore that option. Feel free to email me at dk@browserflow.app


The main thing I want to do with this extension is to speed up my documentation process. For every therapy session, I have to fill out a web form that details what happened during the session, including subjective info and objective data regarding the therapy goals. I already record the data in a Google Sheet, and it would be great to have a one-click solution to transfer that into the web form!

Right now, I have an automation for updating Zoom meeting info, and one that fills out the web form with standard info I include for nearly every session.


Some prior art:

"CoScripter" (2007) <https://blog.mozilla.org/labs/2007/09/coscripter/>

"IBM Automates Firefox With CoScripter" (2007) <https://www.informationweek.com/software/ibm-automates-firef...>

"Your personal assistant on the Web" (2010) <https://www.ibm.com/blogs/research/2010/10/your-personal-ass...>

"Koala: Capture, Share, Automate, Personalize Business Processes on the Web" (2007) <https://ofb.net/~tlau/research/papers/koala-chi07.pdf>

"CoScripter: Sharing ‘How-to’ Knowledge in the Enterprise" (2007) <https://ofb.net/~tlau/research/papers/leshed-group07.pdf>

"Here’s What I Did: Sharing and Reusing Web Activity with ActionShot" (2010) <https://ofb.net/~tlau/research/papers/p723-li.pdf>

Demo: <https://www.youtube.com/watch?v=lKIex_XAxWw>

Source code (bitrotted, of course): <https://github.com/jeffnichols-ibm/coscripter-extension>


aside from Browserflow are there really no similar contemporary projects/products? seems insane to me


There are many. A sample in alphabetical order - and by no means exhaustive:

https://apify.com/ (YC F1)

https://automatio.co/

https://axiom.ai/ (YC W21)

https://extension.dev/

https://www.progress.com/imacros

https://trysmartcuts.com (YC W21)

https://ui.vision/

https://wildfire.ai/

3 of these are YC companies. Reminds me of mixpanel vs segment vs amplitude, where YC are backing all the horses.

Each company is different in its particular approach and niche, though. This is inevitable with any large, or potentially large market.

Disclaimer: Co-founder of axiom.ai


Your website (axiom) needs to be better mobile-optimized. Elements get cut off by other elements, there's irregular spacing, etc.

Doesn't look like a VC-backed app right now.

Compared to DK's website, his looks way, way more polished.

Just my two cents.


Agreed - we're running an experiment on Zapier integration and the callout mucked up the header. It's since been amended.

The homepage bears the scars of a lot of experiments. In pitches, I have had VCs complain about the website. I explain that we optimised for fast, messy iteration and learning. Then I explain what we've learned over X iterations. They've gone on to write cheques.

Still - it's messy and could do with a blank-slate refresh soon. We have new messaging to test.


Yeah I'd ditch the cheesy animation and just work on getting the basics down.


I'm working on a site right now. Can you tell me what would be the basics?


Find an existing theme or template that is popular and covers the main type of content you need. Then just change the content as needed. Don't build something from scratch just to be special


UiPath does both browser and desktop automation. Browserflow looks very similar. https://www.uipath.com/


Not really. It reminds me a lot of iMacros, even the UI

https://www.progress.com/imacros


Historically, the first browser automation tool was iMacros (2001!).

https://en.wikipedia.org/wiki/IMacros


What does bitrotted mean in this context?


Abandoned - last commit in 2013. That's typically so long ago that at least one API, dependency, or similar will be broken.


As a web scraper, I'll say that because he is hooking into the browser like a debugger / remotely controlled browser, just like Puppeteer would, he is instantly detected by the Cloudflare, PerimeterX, and Datadome bot management solutions, and he will get consistently banned on page reload for literally any site that cares about bots.

He'd be better off running some javascript on the page instead (a-la Tampermonkey, but can be done really nicely with some server-served TypeScript) to scrape the pages stealthily and perform actions.


Run it against https://bot.sannysoft.com/ to see how it stacks up

Most anti-Puppeteer tech analyzes the state of various browser Javascript objects, and if you run Puppeteer in headful mode with plugins like https://www.npmjs.com/package/puppeteer-extra-plugin-stealth you'll bypass most detection.
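
For reference, a minimal sketch of that setup (the screenshot path is arbitrary):

    const puppeteer = require('puppeteer-extra');
    const StealthPlugin = require('puppeteer-extra-plugin-stealth');
    puppeteer.use(StealthPlugin());

    (async () => {
      // Headful mode plus the stealth evasions covers most of the
      // JS-visible tells (navigator.webdriver, missing plugins, etc.)
      // that pages like bot.sannysoft.com check for.
      const browser = await puppeteer.launch({ headless: false });
      const page = await browser.newPage();
      await page.goto('https://bot.sannysoft.com/');
      await page.screenshot({ path: 'sannysoft.png', fullPage: true });
      await browser.close();
    })();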


This is not true. Run playwright/puppeteer with puppeteer-stealth + headful + plugins + fonts + OpenGL fingerprinting workarounds and you’ll still 100% be caught by PerimeterX and Datadome if the site’s sensitivity is set to anything but “Low”.

Talk with berstend (author of puppeteer-extra/stealth), join their Discord or read some of his Github comments and you will quickly get confirmation that none of those methods are good enough in 2021 (even with residential proxies or CGNAT 5G proxies).


Disabling headless and adding the following command line option: --disable-blink-features=AutomationControlled

is enough to pass all the tests above with cuprite (Ruby), without needing any extra plugins
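
The flag is a plain Chromium switch, so it works outside cuprite too; a quick sketch of the equivalent launch in Puppeteer for comparison:

    // Headful Chrome with the Blink automation feature disabled, so
    // navigator.webdriver reports false without any stealth plugins.
    const browser = await puppeteer.launch({
      headless: false,
      args: ['--disable-blink-features=AutomationControlled'],
    });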


This is simply not accurate, and you can easily test the claim. Just try running Browserflow on the sites you're thinking of and you can see for yourself whether it's instantly banned or not.


Can confirm, as someone who spent 2 years building software to beat recaptchas/bot management. I literally told DK that there was no way that Browserflow could solve the problems I spent years fighting against. I was wrong... it was humbling.


How exactly do these services detect Puppeteer?


They run JS tests such as the one linked in the peer comment: https://bot.sannysoft.com/


Not only that - enterprise bot management protections will run behavioral identification (e.g. how your mouse moves -> AI -> bot yes/no), TCP stack fingerprinting (plus other device signals if available, e.g. gyroscope), TLS ClientHello fingerprinting (e.g. see https://github.com/salesforce/ja3), etc. Lots of very unique info in the Scraping Enthusiasts discord where lots of pro scrapers hang out.


I was on a project that used Google's reCAPTCHA Enterprise v3 (passive mode, with all that "AI" jazz) and it was hot garbage. We tested against it using a simple Selenium script, and even though `navigator.webdriver` was true, it still gave 9/10 "likely a human".


Can you provide any guides on this? How will the server run the JS on their page automatically?


The easiest approach is to use an extension like Tampermonkey, which can load (and reload) “scripts” from a web server. There are a few project templates on GitHub with Typescript+WebPack (e.g. https://github.com/xiaomingTang/template-ts-tampermonkey). You can automate with any of your favorite Typescript libs, from the comfort of your IDE, with hot reload included. Pretty nifty, and projects can quickly get pretty big that way! I usually have one “script” that has broad permissions (e.g. all sites) with some form of router at the root of the code that branches to the different sites to evaluate.
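
A minimal userscript shell along those lines (the dev-server URL and bundle name are illustrative):

    // ==UserScript==
    // @name     site-automation-loader
    // @match    *://*/*
    // @require  http://localhost:8080/bundle.js
    // @grant    none
    // ==/UserScript==
    // The @require line pulls the compiled TypeScript bundle from a local
    // dev server on each page load; the bundle's root "router" can branch
    // on location.hostname to handle each target site.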


Thanks!

From what I understand, this is only useful for doing scrapes manually by launching the target URL in a GUI Chrome instance? Or can this somehow work on a headless server? (I don't understand how one can automate this.)


This project seems to be consumer-friendly. You should mention to users that:

1. By sharing authentication cookies, they give full control over their personal accounts.

2. By using this automation, they often violate Terms of Service and may be banned.


With all of these apps, there's always the risk it eventually falls into the wrong hands.

I bought the pro version of a great scanning application years ago, only to have it eventually become malware as it exchanged hands over and over.


I'm guessing you mean CamScanner. Such a shame. Have you found a viable alternative?


It was CamScanner for me. Switched to Notebloc with Syncthing to get my scans out.


RIP

I've just used what's built into the OS these days. Not as good, but good enough post-covid where documents can be fully digitally executed.


Congrats, the demos look awesome! Having struggled with something like this in the past (for automated testing), I'm always curious how various solutions represent the "program" so that it's repeatable long-term.

I often had to manually add branching on certain conditions (i.e. login) or waiting for certain conditions (elements appearing) before proceeding.

I also often had to manually tweak selectors and even match on HTML structure (CSS selectors cannot select a parent element based on a matching child element).

Then there are the webpacked React sites with scrambled CSS class names that change all the time.

Some of these things are super tedious to solve for even manually so I am just curious how no-code tools handle these?


Browserflow is more low-code than it is no-code since it has support for control flow statements like "If", "Else", etc. as well as being able to execute arbitrary Javascript on the page. The no-code approach of simply recording a flow works fine in many cases, but there are a lot of escape hatches if the flexibility is needed (e.g. waiting for an element to appear).

There's also support for a few unofficial pseudo-selectors (:contains and :has — see https://docs.browserflow.app/guides/selectors#writing-your-o...) to make selecting elements more reliable.
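
As a quick illustration (names invented), the selector below picks the delete button inside whichever table row contains the text "Invoice", which standard CSS selectors can't express:

    tr:has(td:contains("Invoice")) button.delete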

Hope that helps! Agreed that creating reliable automations for the Web is challenging and hopefully Browserflow will make it easier for many folks.


DK, congrats on the launch. You are onto something with data extraction. I like that you allow correcting the selectors, so that the data can be extracted more accurately and typed.

I found the UI slightly challenging because of a popup window that opens. Resizing is tricky. An overlay would make it so much easier.

Our team at Bardeen.ai has built a workflow automation tool. We also have a scraper to extract data. But our full automation engine (similar to Zapier/Integromat) lets you tap into the full power of web apps, like creating a new Google Sheet / Airtable / Notion doc with the scraped data and then triggering follow-up actions.

If you are curious, here is a “live build” that I did for the Twitter --> Airtable use case. https://www.bardeen.ai/posts/scrape-twitter

Jon mentioned in the other thread automated screenshots. We get screenshots of all of our dashboards from Google Analytics + Posthog sent to our Slack daily. https://www.bardeen.ai/playbooks/send-website-screenshots-in...

Either way, great job there! Love seeing new automation projects pop up.

P.S. - I saw there is an “input” action. Can I feed your automation tool a spreadsheet of data and have it fill out a form (one per row)?


Thanks Renat! Bardeen looks slick and your team clearly put a ton of work into those integrations. Good stuff!

Not sure what you're referring to with the "input" action, but there's a "Loop Spreadsheet Rows" command that will let you perform a set of actions per row in a spreadsheet.


Hey Renat,

Bardeen looks super slick. We are also building something very similar but very much focused on web automation; you can build cross-platform automations on TexAu - https://texau.app

I’m thinking we should integrate with Bardeen, and it would open up so many more possibilities.

~ Vikesh


Hi HN,

About 14 years ago, I fell in love with programming because it made me feel like a magician. I'd type in some incantations, click "Run", and voila! My lightning-powered thinking rock would do exactly as I commanded — in my case, make a virtual robot named Karel move around the screen in my computer science lab.

Nowadays, casting spells requires a bit more work. Most of our work happens inside complex web apps that each have their own custom spell books (APIs) — assuming they even provide one at all.

Let's take a task like managing your social media accounts. Suppose you want to reset your Twitter account and start from scratch. First step: Unfollow everyone. The problem is that you have hundreds of accounts to unfollow, and you don't exactly want to sit there all day clicking buttons.

If you're not a programmer, your options are limited to what others have built. You can hand over your credentials to all kinds of third-party apps and extensions in the hopes of finding one that works. Good luck.

If you're a programmer, you have more options. You have the power to cast spells. What if we used the official API?

You can sign up for a developer account, get an API key, download a client library, read about how OAuth works for the hundredth time, and then start digging through the API to find out how to complete your task.

That sounds tedious and creating a developer account for a one-off task feels like overkill. What if we simulated user actions in the browser instead?

You can install Selenium/Puppeteer/Playwright, read through its documentation to learn how to navigate and click, open the web inspector to figure out the right CSS selectors, run into some race condition where the elements aren't loading in time, sprinkle in some waits, and puzzle over how to handle elements being added from the infinite scrolling list.
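
Here's a sketch of what that route looks like for the unfollow task (Puppeteer, with invented selectors; the scroll-and-wait loop is exactly the fiddly part):

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({ headless: false });
      const page = await browser.newPage();
      await page.goto('https://twitter.com/yourhandle/following');

      let button;
      // Keep clicking unfollow buttons until the infinite list runs dry.
      while ((button = await page.$('[data-testid$="-unfollow"]'))) {
        await button.click();
        await page.waitForSelector('[data-testid="confirmationSheetConfirm"]');
        await page.click('[data-testid="confirmationSheetConfirm"]');
        // Nudge the infinite scroll and hope the next batch loads in time.
        await page.evaluate(() => window.scrollBy(0, window.innerHeight));
        await new Promise((resolve) => setTimeout(resolve, 1500));
      }

      await browser.close();
    })();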

That doesn't sound too great either. Maybe it'd be faster to manually click those buttons after all...

I built Browserflow to automate tasks like this where people end up wasting time doing what computers do better. I wanted a tool that makes it easy for anyone, regardless of their technical background, to automate any task on any website. It includes the convenience of features like automatically generating automation steps by recording your actions while having the flexibility of letting you tweak anything manually or write custom Javascript. (If you want to see how simple it is to automate the Twitter task above using Browserflow, here's the demo: https://www.youtube.com/watch?v=UnsGTpcA-98)

Building a browser automation tool from scratch and making it robust enough to handle all kinds of websites took much, much longer than I expected. After working on it as a solo developer for 1.5 years, I'm super excited to finally share this with you all. Whether you're a professional developer or a non-technical marketer, I hope Browserflow will make you feel like a magician on the Web.

Would love to get your feedback!


Feedback: The on-boarding workflow is so, so good. So pleasant and well thought out. Hard to believe this is the launch... I'd expect folks to go through many iterations of it and not get to something like this. Really, really well done.

Whenever I see extensions like this, it does make me wonder something I've never fully understood: given that so much of it runs client-side, what's the state of the art nowadays in terms of preventing someone from essentially "stealing" your extension? I've come across some extensions in the past -- even ones that were paid for and part of a business -- where it was clear that it would've been trivial to simply copy the extension code and run it on my own. How do you prevent folks from doing that?


Oh, don't be fooled — many of my friends have had to go through much worse onboarding experiences to get to what it is right now. ;) Appreciate the kind words!

The two pieces that make it harder for someone to simply copy the extension code and run it themselves are (1) Javascript minification and (2) parts that talk to a server.

Someone can try to reverse engineer the calls to a server and run their own, but that's often more work than most people are willing to go through and it's made harder by the client code being minified.

I wish it were possible to open source extensions without running into this problem because I like sharing things I've built (https://github.com/dkthehuman/), but doing so effectively kills the possibility of building a business on top of it unless you maintain two code bases (one free and one paid) at which point it's more trouble than it's worth IMO.


This is a really excellent sales pitch, you answered all my “why not just” questions upfront. The video makes this look very smooth to use, excited to try it out next time I have a need.


If this could take screenshots, I would sign up in a heartbeat.

Here is my need (and I've had this need my whole working career): What does production look like?

If a tool could automate logging in, browsing specific flows, taking screenshots of every page, and adding them to a folder for the day, it would be invaluable.


You're in luck! Browserflow can take screenshots and save them to Google Drive every day. :)

Take a look at the "Take Screenshot" command and feel free to message me on the Browserflow Discord if you need help.


> and save them to Google Drive

No, thank you. Please offer a local save option.


If you run the flow locally, it'll save locally.

If you run the flow in the cloud, it'll need some place to persist it so I've chosen Google Drive to start. If there's enough demand, I'd be open to storing it on Browserflow's servers and providing an API to access the files but I'd want to make sure that's something enough customers want before sinking my time into it.


Something to consider is to support an s3 compatible target [1], so a user could target AWS, Backblaze, their own minio instance, etc.

Of course don’t spend the cycles building until demand presents itself.

[1] https://developers.cloudflare.com/logs/get-started/enable-de...


I'm not (yet) a Browserflow user, but it's something that I'm going to either pitch or just skunk into my workflow. But when that time comes, I will absolutely need a first-party way to download screenshots. I'd be open to pulling them from an S3 bucket, but absolutely not from Google Drive.

That time is at least a few weeks off, though, probably after the new year.


Such a thing already exists. The ui.vision extension is roughly the same, but it runs locally (no cloud):

https://ui.vision/rpa/docs/selenium-ide/capturescreenshot


How much are you offering for this feature?


Do you think you would buy this if it did? The pricing seems high.


Jon, what's the main pain or goal with capturing a screenshot for every page that you visit?

It's tricky to do on your machine without performance suffering. At the end of the day, all full-page PDF generators have to scroll to the end of the page, which would make it really tough for you to browse around.

A solution to this would be to just capture the URLs that you visit, and then do the screenshot generation in the cloud. The limitation is that none of your websites with logins would get captured.

Local storage is another issue for Chrome extensions. There is a limit to how much data can be stored.


I'm not a programmer so it would need to be easy enough to set up and maintain.

Logged in states are a must have.


Sikuli http://sikulix.com/ could perhaps be helpful.


Sikuli is good for desktop automation. For browser automation an extension based solution (such as this one) is easier to use.


How about a few lines of python with selenium?


It’s more than a few and less than a bushel.


You could also do most of this (except adding the screenshot to a folder, but you can get it via the API if you need it) and more with our free plan - https://www.rainforestqa.com/



I worked with a test automation system a while back that used ffmpeg to screenshot a headless browser. Similar approach might be workable on desktop.



What is the specific use case or value that you extract from saving those screenshots?


Too bad headless Chrome seems uninterested in supporting browser extensions... "wontfix"

https://bugs.chromium.org/p/chromium/issues/detail?id=706008

Otherwise, you could make a pretty neat self-hosted "cloud" of this nice looking scraper extension.


I built Browserflow Cloud for that purpose ;)

It was a lot of work translating the extension code to work with headless browsers, but it means you can deploy your flows to the cloud and have it run automatically!


Yes, sure. Just that some use cases, like internal applications that aren't exposed to the internet, or dev instances of those that are, would be difficult.


What type of app did you have in mind?


automation/screenshotting of a bespoke internal ticketing system (aka not-Jira), but without needing to have my laptop on.


Recently a `--headless=chrome` flag was added to run headless mode using actual chrome browser code, which means extensions are now supported!


Sounds promising, thanks for sharing! "--headless=chrome" isn't google searchable, so I'm still looking for details.


Isn't the "chrome" value for "headless" parameter ignored? Maybe he simply meant the --headless switch (since Chrome 59) https://developers.google.com/web/updates/2017/04/headless-c...

EDIT: --headless is under kHeadless https://sourcegraph.com/search?q=context:global+repo:chromiu...


Oh, then it isn't helpful news. Headless does not support extensions.


I mean the CLI flag. I suppose it's called a switch. And yes, it does support extensions if you do --headless=chrome


Pretty cool and well presented.

I have concerns with the gallery items that advertise 'scraping' of LinkedIn data. You might want to keep that on the down-low.

IANAL, but I'm pretty sure that's grounds for getting you shut down.


It may violate the TOS of Linkedin, causing them to shut you (and your users) out. Shutting them down though, I expect that's a way higher bar than some web scraping.


It evokes memories of the youtube-dl fiasco, in which the application was removed from GitHub for showing examples of downloading specific material from YouTube.


Really neat, that's the kind of stuff I always wanted someone to build. I think a marketplace of workflows would be a great next step, so that you can have someone else maintaining the flows.

I build tons of scrapers and things that pretend to be browsers (handcoded, not recorded from the browser - but lighter than spinning up a real browser), and the harder bit is keeping the flows maintained. Some websites are particularly annoying to work with because of random captchas jumping in your face, but that's something you can handle by coding support for the captcha into the flow and presenting a real user with the captcha.

One problem with logging in from the cloud is IP checks. You may be asked to confirm your identity.

If you want to look into these issues, I'd recommend scraping Yandex for dealing with captchas being thrown in your face, and authed Google or Facebook for IP restrictions and weird authentication requests.

Again, I think a marketplace could outsource these problems to a community of developers maintaining flows.

Security could be another concern, but you always have the option of running things locally.


For sure! I'll definitely be exploring the marketplace idea. Currently, you can share flows and import flows that others have shared, but there isn't (yet) a nice way to discover ones others have made or charge for flows you've made.

Maintaining flows as sites change is definitely a drawback for any scraping solution, so I built features like generating selectors by pointing and clicking to make it as easy as possible.

Browserflow Cloud has built-in support for rotating proxies and solving CAPTCHAs to get around the issues you mentioned. (They're currently in private beta.)


I would love it if this extension was made for Firefox!


Unfortunately, Firefox has some technical limitations: https://news.ycombinator.com/item?id=29256938


Name two things that go better together: Firefox and not implementing standard APIs properly.

Probably "Chrome" and "stealing data" I guess.


Beautiful UI. Hope this takes off. Love the simplicity of the design and the ease of use. If it could make writing my Cypress UI tests easier, that might be another adjacent problem to look into, by logging the recorded elements as code.


Thanks! Right now you can use it for testing by throwing an error from the flow (using the Run Script command) if the result differs from what you're looking for (e.g. an element doesn't exist, some text doesn't match the expected output) and Browserflow will email you when the flow fails. It's currently not optimized for that use case though, so if it becomes popular, I'd probably look into adding more support for testing.
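
For instance, a hypothetical Run Script step along those lines (the element and expected text are made up):

    // Fail the flow (and trigger the failure email) if the page
    // doesn't show the expected status.
    const el = document.querySelector('#order-status');
    if (!el || el.textContent.trim() !== 'Shipped') {
      throw new Error('Assertion failed: order status is not "Shipped"');
    }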

I've thought about adding support for generating a self-contained bundle of Javascript code from the flow that anyone can self-host and run, but that hasn't been a priority for the current use cases. Will keep it in mind — thanks for the suggestion.


I like the name. But how is this extension different from iMacros, Selenium IDE or UI.vision?

These extensions are some well-known browser automation tools, each with > 100K users.

Did you look at them and decide to do something different?


Yup, I've tried dozens of tools in this category, and here are some of the main differentiators:

- Ease of use: Most existing tools have pretty clunky UX and are hard to get started on

- Reliability: I've had issues with many tools simply not working, especially when it comes to more complex sites

- Cloud: Browserflow lets you deploy your flows to the cloud to run them on a schedule whereas many tools are local-only

Hope that helps!


I was skeptical at first, but your comment convinced me to take a look. It really is much easier and cleaner, and having it run in the cloud is amazing!

Nice job!


I’ve used iMacros; it’s not as easy as this one.


So good - I came across your updates on Twitter and was blown away by how mature this product is already, especially at launch. Congratulations!

I’m also a user of your other Chrome app you developed (Intention) [1] - I’d recommend people try that out too (simple free app for productivity), it’s also so good.

Good luck with the launch!

[1] https://chrome.google.com/webstore/detail/intention-stop-min...


Thanks Rohan! Glad you're enjoying Intention and thanks for checking out Browserflow. :)


On the free plan, the max run frequency is once a day and you get 50 runs/month, but it's impossible to actually reach that limit.


Neat dude! One tip - I'd add subtitles to your demo vid.


Neat!

I think your business plans are underpriced. You're saving human hours with increased accuracy, which probably is worth more than $25/month.

edit: nevermind. Your annual pricing shows per-month pricing which is $299/month.


I felt the entire opposite. There's no room to actually use the product well in a trial (free), and the next steps up are quite expensive.


I agree. There are a number of free open source tools, albeit with "worse" UX.

Although maybe he is aiming for people who have zero technical competence. I admit I would not have invested in Dropbox because I grew up with FTP.


Why should I pay for this if Automa is free open source and better?

https://github.com/Kholid060/automa


hmm, I installed the extension, clicked on "Get Started", allowed permissions, and was then presented with a signup/signin page with no way to skip it.

I just want to run some macros locally. Not interested in "cloud" anything unless its on a server that I control.

Questions/comments:

1. Will local functionality work without signup?

2. If so, then please consider a "Get Started" flow that makes this clear and does not require signup before giving any instruction/usage.


hmm, I was going to experiment on this hackernews page, so I clicked on Extensions->BrowserFlow and it opened a window with a single button "Sign In". So it appears it does not work locally at all.

Question: Is there a technical reason for this, or you just want to insist that everyone create an account for some reason?

I do not see an obvious reason why macro recording and playback could not be performed purely locally. So I am skeptical that a server is needed, but always willing to learn and be astonished...

I would think that even if (big if) a server is needed, it could be done anonymously without need to collect email and create an account.

Anyway I am uninstalling the extension until such time as there is a local-only mode. too bad, from the demo it seems neat.


I made the decision to require user accounts to streamline the experience around saving flows (they're automatically synced to your account so you can share them between devices).

As a solo developer, I don't have the resources to support two versions of the product (one for users with accounts, one for users without) because it'd require maintaining multiple ways to save, load, and share flows.

Thanks!


Excellent work; I started the demo with the usual jaded attitude, but then - WOW - nice - I'll have good uses for that!

Can you drive actions from a spreadsheet or table? The workflow I'm thinking of is first do a run to gather data, next offline filter or select desired items from that data set, then finally use the processed data set to drive a series of actions on the selected items.

Also, any chance there'll be a Firefox version?


You definitely can drive actions from a spreadsheet — it's one of the most common usage patterns.

Here's a demo showing it in action: https://docs.browserflow.app/tutorials/tutorial-scrape-a-lis...

(The demo shows scraping each URL but you can perform arbitrary actions for each row of the sheet.)

As for Firefox, it's unfortunately not currently possible: https://news.ycombinator.com/item?id=29256938


Thank you - looks really useful!

Interesting about Firefox vs Chrome API differences...

BTW, those are very nice demos too - clear and concise.


DK! Happy to see a new project from you. I'm a die-hard Hide Feed fan.

This reminds me fondly of using Vimperator macros and link hints to automate tasks back in 2007. It has always surprised me that the most widely-used UI paradigm lacks a standard way to automate tasks - we all spend so much of our lives clicking links, it seems insane that there isn't more/standard tooling for automating that.


Oh wow, you're a real one. Glad to see you here. :)

I know, right? HTML is the ultimate API and I also found it strange that there aren't tools around helping people work with it more effectively. Hence Browserflow!


Congratulations DK, this looks amazing!

We are building a similar tool at TexAu and would love to catch up with you sometime this or next week!

@iamvikeshtiwari on Twitter


I have somewhat of the same system involving selenium + a python scrape engine + jenkins + docker containers with headless chrome.


Care to share?


Damn DK! This is awesome. I’m following you and Browserflow. This post should be on top for days straight at least, guys.


Thanks for following along — I appreciate the support!


Cool. This reminds me of AppleScript. I miss those days. I'm surprised GNOME still doesn't have something that easy.


Presumably using Chrome DevTools Protocol? Or maybe that new Recorder panel stuff [1] in DevTools has unlocked some new capabilities for Extensions?

[1] https://twitter.com/JecelynYeen/status/1458089611004162060


Yup! All the automation is done via CDP and client-side Javascript.
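
For the curious, here's a simplified sketch (not the actual Browserflow code) of how an extension can drive a page over CDP using the chrome.debugger API:

    // Attach to a tab and synthesize a left click at (x, y) via the
    // DevTools protocol's Input domain.
    async function clickAt(tabId, x, y) {
      const target = { tabId };
      await chrome.debugger.attach(target, '1.3');
      for (const type of ['mousePressed', 'mouseReleased']) {
        await chrome.debugger.sendCommand(target, 'Input.dispatchMouseEvent', {
          type, x, y, button: 'left', clickCount: 1,
        });
      }
      await chrome.debugger.detach(target);
    }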


Whilst nice, how is this going to handle the changing nature of the web? It's nice that it detects "lists" and such, but a few changes to the CSS are going to trash that automation, right?

I'm also fairly sure you'll break (either directly, or on a user's behalf) a few EULAs that really specifically ban scraping.


> but a few changes to CSS is going to trash that automation right

hence why it's nice to have that extension to click through the UI rather than figure out how to parse things no?


Didn't this case [0] set a precedent that "scraping is not against the law" irregardless of the EULA?

[0] https://en.wikipedia.org/wiki/HiQ_Labs_v._LinkedIn


This might be true in the USA, but the EU has a thing called database rights[0]. Essentially, any collection of data can under certain circumstances be protected under database rights, which prevents other parties from copying (parts of) it. This originally was created to protect such things as phone books and other directories, but when I was a student (I don't remember the context anymore), they specifically warned us that scraping certain websites would violate their database rights, and thus be illegal. So using scrapers in the EU is something you should be very careful with, especially if your business depends on it.

[0] https://en.wikipedia.org/wiki/Database_right


So it was proven that it's not a criminal offence to scrape a website, but a website is still well within its rights to ban you from doing so.


"using data that is publicly available"

If the user is logged in, that data may not be publicly available, and the EULA would still apply.


Pedantry: regardless


You pedantic piece of... nah I'm just kidding, Thank you. I actually learned English by watching Clint Eastwood, Charles Bronson and Sylvester Stallone movies, so my grammar might be slightly off from time to time, but google actually agrees with me when I say: irregardless == regardless.


https://en.wikipedia.org/wiki/Irregardless

https://www.merriam-webster.com/dictionary/irregardless

I dislike that word more than I dislike "nucular". Like diarrhea, anyone can let it slip.


Ah, so people've been making this mistake for over two hundred years but thanks to people like you, this misuse of language has been all but eradicated?

Radicated?

;)


That is quite the pun!

I tend to remind people who think that this is an error that, although I share their dislike...

- There is a case for the word and it predates us

- Languages are dynamic and today's "correct" spelling is yesterday's "erroneous" spelling.

I thought until recently that the spelling was "simply incorrect" until I found out there was more to it. It therefore is a reminder to myself as well.


This is awesome. Congratulations on the launch. I can see a number of use cases for Marketing, CS, and Sales orgs


Congrats, looks great, especially the UX.

Could you elaborate on cloud runs and cookies? E.g.:

- How are the cookies obtained? I saw that in the video you clicked the "add" button at 1:36, how does this work and what happens behind the scenes?

- How long do the cookies remain in use? Does the user have to refresh cookies manually at some point?


When you click the add button, the extension grabs the cookies for the specified domain from your desktop browser and attaches them to the flow to be used when running in the cloud.

The cookies are used for as long as the user keeps them in the cloud flow. (Browserflow doesn't try to be smart and automatically refresh the cookies on your behalf because there are scenarios like using multiple accounts in the same browser, etc.) Most major sites use quite long expiration dates for cookies (a year is fairly common) so there usually aren't issues with cookies becoming invalid for a while.
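
The grab step is conceptually as simple as this sketch (simplified, not the production code), using the chrome.cookies extension API:

    // Collect every cookie for a domain; each entry carries name, value,
    // domain, path, expirationDate, etc., which is enough to recreate
    // the session in a cloud browser.
    const cookies = await chrome.cookies.getAll({ domain: 'twitter.com' });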


Are you aware of limitations around site support with this approach?

Several years ago I implemented a similar feature, just reversed - a remote machine logs a user in, then passes the cookies that result from login to an extension running in the user's browser, which drops them into the browser's cookie jar.

Worked very nicely, right up until you ran it to log into a service like Gmail.

Then Google correctly notes that you're using the same cookie in two different locations, assumes you've been session-jacked (and you have, really - you just did it willingly), and locks EVERYTHING. It took a notarized copy of my driver's license before they let me back in.


Might behoove the author to offer a proxy via the extension or whatever so that the cookie is generated and used on the same subnet.

I don't know how accurate Google can be, though, as I route Gmail traffic from the same cookie through three ISPs and a self hosted VPN, without refreshing.


i recall that browsers would soon make it harder for extensions to grab cookies like that - does somebody know more?


Sounds similar to Microsoft Power Automate, though Power Automate works with browsers other than Chrome.


This is amazing!

I'm excited to see whether this will remain available long-term or websites will figure out ways to block it, and what the limit on data scraping from one script is - for example, if a Twitter account had 1 mil followers, could it do all of them in a day? I'm going to try it out!


One of the major benefits of Browserflow automating your normal desktop browser (instead of creating a separate browser instance) is that it's indistinguishable from you using the site directly.

Of course, that doesn't stop websites from rate limiting you if you try to do too much too quickly. In general, I'd recommend being conservative and automating what you think a normal person would be capable of doing manually.


Looks great, I'll give it a try for my next scraping project. My favorite of all these types of tools was kimonolabs (http://kimonolabs.com) before they were acquired and shut down.


Loved Kimono


FYI you can run this tool locally, but it cannot save the results locally. It requires Google Drive to upload the data to a spreadsheet. File download/screenshot does support local execution though.

Can't schedule locally either, only in the cloud.


Congrats on the launch DK! It's awesome to see how polished you've made this.


Thanks Todd! Reflect has been a big inspiration for how smooth the user experience could be. Happy to see us both on the front page today! :)


So this is very specific, but I've been trying to automate clearing my LinkedIn messages for a while, and I just tried Browserflow to no success. I tried using the selector and also grabbing CSS paths off random elements.

Any idea how to debug?


Come to the #help channel in the Browserflow Discord and we can work it out!

Link to the Discord is here: https://docs.browserflow.app/


Just curious, why do you offer residential ips? My initial thoughts are reseller botting, but it honestly seems kinda slow for that, not to mention max rate of 1/min. So what else are residential ips good for?


Yeah, Browserflow definitely wouldn't be able to compete with specialized bots for reselling in terms of speed.

I added support for residential IPs to handle sites that employ aggressive bot detection, but it's in private beta and so far there hasn't been much of a demand for it. If it turns out that it's not really needed for the use cases people have, I might just remove the feature!


>So what else are residential ips good for?

Chances are if you've thought of something you could scrape that would offer a broadly popular, real tangible benefit, they employ anti-bot measures that don't like non-residential IPs.


I’m a bit confused with the Pricing page. Does the Chrome extension limit of 30 minutes per month in the free plan mean that completely local use on one’s computer is restricted by time? How is this tracked?


I was wondering that too. Paying for cloud usage is entirely reasonable, but there is no reason why there should be limits on what I do locally. Demanding a monthly subscription for local software is grubby—wouldn’t consider that.


Hello, my fellow engineers. :)

The reason for having limitations for local automations is a business one, not a technical one. I like the idea of scaling the price based on how much value Browserflow provides (if it saves someone hours of work, paying less than 20 bucks should be a no-brainer) so that's why there are runtime limits by plan.


I’d be happy to pay $19 once, just as I’ve purchased my other apps. I’d even pay twice that, and I would also consider it fair to charge for upgrades.

But I believe in paying for value, and the value of the software should be covered by a one-time payment, as is the case with purchasing any other product. Continuing to demand payment without providing additional value—a service, computing resources—is not reasonable or appropriate, and I can’t support that business model.


That's totally fine. I chose this model so that I can create a sustainable income stream to continually support and improve Browserflow, but you don't have to agree.

FWIW, I did consider the one-time payment and upgrade model, but it's not possible for Chrome extensions because the distribution is controlled by Google and all users are automatically upgraded to the latest version.

Cheers!


Thanks a lot for the prompt reply and clarification. I agree with the sentiments in the other sibling comments. At this point, this tool is not for me.


YES! I have been wanting a solution to automate some of my simpler web-scraping needs that also need an element of human control. Thank you for making it so I didn't have to :) Can't wait to try it out


This looks really interesting!

Do you have any plans to port it to also run on Firefox?


I really wanted to make it run on Firefox as I do with all my other extensions (e.g. https://news.ycombinator.com/item?id=22936742), but Firefox lacks the APIs that Chrome has for automating the browser. :'(


which APIs are those?



interesting, thanks!


https://bugzilla.mozilla.org/show_bug.cgi?id=1316741 if you want to vote for this. Has been stalled for a long time.


like so many mozilla issues.


yeah, well, forced color scheme pickers don't just write themselves, you know


I'm going to try using this to update certs on my Brother printer. It is one of the few that I have been unable to automate/hack together something for LE cert rotation.


I was just thinking about how I wanted something exactly like this for certain tasks I need to do daily or weekly at work! does it have support for filling out file upload forms?


Hey Adam, it does! It was a pain in the butt to build so I'm especially proud of it. You can use the "Upload File" command to attach a file.

If you're running the flow locally, you can put in a local path. If you want to run it in the cloud, you'll need to first download the file and then upload it.


neat, I'll check it out. I wondered if it was painful to implement!


Why would I need to signup if I only ever want to run local flows?


Nice job! I am wondering if such a plugin can automate across browser and desktop, like the iMacros premium functions? You can even open docs and manipulate across multiple things.


Beautiful UI. What's the framework -- React? Vue? Angular?


I'm loving the nice comments about the UI because it's all good ol' Bootstrap. Tools in this category usually have pretty terrible design so I guess the bar is quite low. ;)

I'm using React for the Javascript!


This is so slick. I think I'll give it a try. Thanks!


Sincerely interested in: are there other examples of software running locally on my PC, but with a working-time limitation depending on subscription plan?


I definitely would have loved this in February when I had to upload 40 genomes to the NCBI, using the same format. Cheers and this looks awesome.


This looks really, really nice. I had my doubts going into it but the demo really blew that away.

I saw you can save to CSV, can it save to other file formats?


Glad to hear it!

Browserflow saves to Google Sheets which can be converted to other file formats as needed using other tools.


I’ve dabbled with lots of RPA software but never settled on one, mostly due to poor UX or the need to write code. Will give this a whirl!


Nice work! I've been trying a few similar tools. How do you handle auth (e.g. LinkedIn) when the work is running in the cloud?


I think in the video, he was getting the original cookies from the website, and reusing them in the cloud


Thanks! Auth is handled when running in the cloud by adding the relevant cookies to the cloud flow.


I tried it but somehow couldn't get the extension to populate the data in a Google Sheet. Also the UI sort of feels convoluted.


Any plans to support HTTP authentication?

This is wonderful! It could save us from having to hire an administrative assistant to do manual scraping.


There aren't immediate plans because it hasn't been a common request, but I'm open to it if there's enough interest!

Feel free to email me at dk@browserflow.app to discuss your use case.


Thank you

You'll save me from arthritis and carpal tunnel


This is excellent… Congrats and super useful!

I will no longer write some pretty gnarly jQuery in console to do the same thing.


Can the extension handle logging in to a website? How can we store our user name and password securely?


If you run the flow locally, Browserflow uses your browser directly so you'll already be logged in.

If you run the flow in the cloud, you can log into a site by adding your cookies to the cloud flow.


Is there any plan to support login via username/password? I currently have a use case for it.


Looks like it copies your cookies once you’re logged in and uses that, which means it’ll probably not work after your login session expires.


Very nice! Just be careful, Twitter is pretty aggressive with banning accounts that use automation.


It's impressive. What's been the biggest challenge building it so far?


The biggest challenge has been getting over the "just one more feature..." trap and actually launching the thing. I still see so many areas for improvement, but it's nice to see that people appreciate how far it's come.

In terms of technical challenges, the biggest one by far has been creating flows that are reliable over time. Sites change all the time so having automation that's reliant on HTML structure, CSS classes, or pieces of text is inevitably going to be brittle, but I want to minimize how often flows break. Browserflow aims to generate flows that will remain stable for as long as possible, but there's certainly a lot of room for improvement.


Do you think this could be used for testing purposes as well, not just scraping?


For testing purposes, you can achieve similar results with Selenium IDE. You can export tests and upload these to (our website) https://testingbot.com - supports screenshots and scheduled runs.


It'll work, though it's not optimized for that use case.

Wrote more about it in this comment here: https://news.ycombinator.com/item?id=29256076


this is dope! I've been thinking about building something like this for a few months now. Glad to see that I don't have to anymore! :D


Really great job with this. Definitely fills a need


Countless startups have been started on this idea


Tell me more


A couple of issues from where we stand:

"Save time by automating repetitive tasks in your own browser or in the cloud." My own browser? Our shop uses Firefox because we don't support Google's anti-competitive behavior or their support of Chinese ethnic cleansing.

"Web automation for everyone" Sure, except for the people you exclude from your club for not using Chrome and suckling Google's communist teat.


Can you compare with UiPath?


Congrats on the launch DK!!!


Impressive work :) #jealous


Well made. Nice work


that is a very well-made presentation, congrats


Excellent idea.


Very helpful for automating hiding “topics” on Twitter!


Can you please elaborate what you mean?


On my twitter feed, I keep seeing tweets from something twitter calls "topics". Apparently they were automatically assigned to me based on previous tweets I had liked. Didn't want to see any tweets from people I don't follow. And instead of having to unclick 200 topics or so, I used this app to automate removing them from my profile.


Neat. Love it.


i remember iMacros


Good luck! Is there a special price or discount for HN community?



