I've got a couple hundred GB to download with Google Takeout, so I selected the 50GB file size option, but unfortunately the browser crashes at around 46GB. It actually brings down the whole machine (macOS), with Activity Monitor showing Firefox using ~46GB of memory.

Is there some weird niche problem I'm running into here? I'd expect Firefox to stream the download straight into its .part file, so holding all 46GB in memory seems odd.

Is there some way to mimic the Firefox download, with all its cookies, as a wget/curl command? Dev tools let you copy anything in the network console as a curl request, but since this goes straight to a download, I don't think the console ever sees it.
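For reference, what I'm imagining is something like the sketch below; the URL, cookie names, and values here are placeholders, not the real Takeout ones:

    # hypothetical copied-as-cURL request; replace URL, cookies, and user agent
    # with whatever the browser actually sent
    curl 'https://takeout.example.com/download?file=takeout-001.zip' \
      -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:115.0) Gecko/20100101 Firefox/115.0' \
      -H 'Cookie: SID=PLACEHOLDER; HSID=PLACEHOLDER' \
      --output takeout-001.zip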

Honestly any ideas on how to move forward would be appreciated.

Edit:

I ended up using an extension called cliget that does all the "copy as wget" work for me. I added -c to the wget command so it could resume from the partially downloaded 40GB file and went from there. I think copying just the download link might also have worked, since it points to some random domain and probably uses a JWT-like auth token, but it's unclear whether the server would reject a wget without the correct User-Agent or other headers. YMMV.
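Roughly, the command cliget gave me looked like the sketch below; the URL and header values are placeholders, and the real one carried the session cookies cliget captured from the browser:

    # hypothetical cliget-style command; -c continues the existing partial file
    # instead of starting the download over
    wget -c \
      --header='Cookie: SID=PLACEHOLDER; HSID=PLACEHOLDER' \
      --user-agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:115.0) Gecko/20100101 Firefox/115.0' \
      -O takeout-001.zip \
      'https://takeout.example.com/download?file=takeout-001.zip'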

[–] [email protected] 1 points 1 year ago (1 children)

You could copy a different GET request from the same domain, then grab the download URL and swap it in place of the URL in the copied request.
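Something like this, assuming the copied request's cookies are still valid (everything below is a placeholder):

    # curl copied from another request on the same domain,
    # with the download URL swapped in
    curl 'https://takeout.example.com/download?file=takeout-001.zip' \
      -H 'Cookie: SID=PLACEHOLDER; HSID=PLACEHOLDER' \
      --output takeout-001.zip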

[–] [email protected] 1 points 1 year ago

That's a neat trick, good call! I ended up using an extension called cliget which seems functional so far.