Ubuntu package #52
Or, alternatively, package the Linux version using Flatpak or AppImage.
there's currently a Flatpak package of this on Flathub, although a native one built from source would be better. Per Flathub policy, you'd be more than welcome, as the developer, to take over responsibility for it.
I added a shell script that creates the AppImage on a fork. It essentially depends on 'convert' (for the icon) and wget, which are available on most systems I expect. It isn't written very cleanly (so I will not make a pull request of it in its current state). Someone could probably take those steps and rewrite them so they integrate properly into the Zotero build system. Off the top of my head, I know of canoe as another project using npm that builds AppImages as releases. (Gruntfile? I know very little about this system in general, sorry.) Maybe one can see how it is built there.
@retorquere maintains a repo of Zotero debs.
PPA packaging is a mess and is hard to automate. The only benefit is that it's an easier route into Ubuntu, but they also want from-source builds, so see the former point. My scripts would be simple to adapt for packaging as part of the build process -- and they'd be stripped pretty far down, since they'd no longer have to detect new versions being available, they wouldn't need to cover both Zotero and Juris-M, etc. I'd be happy to relinquish hosting. The only real thing that would remain is where to host the binaries -- I host them in GitHub releases, but hosting them on S3 would be simpler still. The most important driver for the hosting decision is whether you want download stats, I think. GH stats are not super great; I don't know about S3. SourceForge is also super easy to set up and has better download stats (and it's allowed to use them only as a binary hosting service), but SF isn't as popular with the OSS crowd as they once were. Bintray seemed like the most logical choice, but it was finicky to set up, and I hit upload limits during the first two weeks of testing; I didn't want to be hobbled by the risk that I'd need to put out a release but couldn't.
@retorquere aside: Your various contributions to Zotero, especially BBT, have saved days of my life; thank you. Also, thanks for the high-speed explanation of debs, Emiliano! I wonder if we should avoid some of the friction you identified by packaging it the way Ubuntu seems to want pre-built apps distributed these days, i.e. as Snap packages. This process, I think, is not dissimilar to the Flatpak builds, which one can also run on Ubuntu. Snapcraft explicitly supports pre-built binaries and seems to include hosting, so that sidesteps the hosting and build questions. It might have other frictions, however. I'm not sure whether people have strong commitments to having both formats, or what the pros and cons are.
I have no strong opinion on snaps -- I tend to opt for debs when available, but that's probably just because I'm used to them. To be clear, debs support pre-built binaries; it's just that PPAs/Debian build rules don't allow them (which is why e.g. Oracle Java is packaged as a shell script that downloads and installs Java during .deb installation). Snap tools are likely to be more pleasant to work with, and I say this not because I know them but because I can't imagine them being equally frustrating as the .deb tools. It took me ages to find out what was in the end very simple; there's a thousand ways to package debs, and a lot of the docs and tutorials tend to
It is not a pleasant process, but once done, it's set and forget. Because I still want to verify I'm releasing the right thing, I'm releasing "manually", but that's really just a commit of an auto-updated config file (and that would be skipped for a real Zotero build, because the check I do is just whether it indeed grabbed the correct version); Travis takes care of the actual build & repo update. One upside of debs vs snaps is that Chromebook users can install them. Other than that, anything that reliably gets me the latest version of Zotero is fine by me. I have no preference one way or the other.
I did have at one point a setup that used a fork of zotero-standalone-build to build a PPA-compatible package from scratch, but the PPA build infra blocks network access during build, so you can't fetch the Firefox binary as part of the build. Since you can't fetch binaries during build, and you're not allowed to have them as "source", PPAs/Debian mainline will just not work. The same goes for anything that relies on network access at build time. Self-hosted debs don't have this problem, and neither do snaps/flatpaks.
I can convert (simplify, mainly) my existing script for https://github.com/retorquere/zotero-deb, but I'd need to know
(edit: downside of a simple repo is that there's no multiple versions for separate distro-versions, but Zotero doesn't have this in any case)
Not Debian-based, but we could use Docker for that part.
Files are available in
I'm not sure what that means.
The repo in this case is…the thing that gets checked for updates? What form does that take? We tend to keep update manifests on actual webservers in case we need to do conditional updates for any reason (e.g., not serving an update to a particular set of clients). Downloadable files would go on S3 with the rest of the binaries. But I can handle adding the upload stuff as long as I know what files need to go where. Thanks for working on this!
But they wouldn't be in the Docker env, right? The scripts inside docker could just fetch the tarball as I do now.
It means that given a Zotero arch/version, there's only one .deb, not one for each Debian/Ubuntu/whatnot release. In a full repo, you can have one version for Ubuntu 18.10, one for 18.04, etc, but I tried setting that up and it was a major PITA. Given that Zotero only has a single build per arch/version, I don't see any benefit in trying.
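For readers unfamiliar with the distinction: a flat ("simple") repo of the kind described above is just one directory of files. This is a hedged sketch with illustrative file names, not the actual zotero-deb contents:

```
# Hypothetical flat-repo layout (file names are illustrative):
zotero_6.0.8-1_amd64.deb
zotero_6.0.8-1_i386.deb
Packages        # index of all .debs, e.g. dpkg-scanpackages output
Packages.gz
Release         # checksums of the Packages files
Release.gpg     # detached GPG signature over Release

# Matching sources.list entry -- the "./" distribution is what marks this
# as a flat repo with no per-distro subdirectories:
# deb https://example.com/apt/ ./
```

A "full" repo, by contrast, adds dists/&lt;codename&gt;/ subtrees so each distro release can get its own package versions, which is exactly the complexity being declined here.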
Correct, and where the debs are downloaded from
Any web server that supports https and that allows files to be downloaded directly (although 301/302 redirects are OK) with a GET;
will work as long as
etc will work; e.g., a simple website hosted in an S3 bucket will work. You can see a full list of assets at https://github.com/retorquere/zotero-deb/releases/tag/apt-get for a simple repo; obviously, a Zotero repo would not host Juris-M binaries. I'd still be maintaining the repo for JM binaries until Frank decides he wants to host them himself, but I'd strip Zotero from the repo.
A particular set of clients as in OS-dependent? Or another discriminator?
A redirect to S3 would be OK (technically the assets on GH releases are hosted from S3 buckets, and you get sent there by a redirect), but simple repos (and as far as I can tell, full repos) require all URLs to sit on the same base; you can't have just https://hostname1/Packages and https://hostname2/zotero_5.0.73-1_amd64.deb (the
No, but we could easily specify the dist directory as a mounted volume. While we could run this as a separate step after the normal tarball upload, there's no real reason to redownload and extract the same files.
Right.
OS version or current Zotero version, mainly. Though I doubt we'd care about distro/version here, and presumably we wouldn't get the Zotero version anyway. So this probably doesn't really matter. (Basically, this has been useful in the past for things like not serving Zotero above a certain version to users on old versions of macOS, or serving a special manual-upgrade message to clients where the updater was broken.)
Does the build process build both 32 and 64 bit? To finalize the repo you need all packages in place.
It's possible to run them one by one, no issue, but
Can I detect in the |
(other than by inspecting the ELF header of |
After the |
I hate to say this, but this is possible using a full repo. You can have varying versions per target distro. It was a major PITA to get even halfway through; I didn't get it to work last time, and the docs on the process are not set up for easy digestion. Tutorials abound, many of them outdated and conflicting.
Broken updaters are out of scope for debs -- Zotero-updating is disabled in the debs because they're installed as root; updates would come from the deb repo.
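Since Zotero is Mozilla-based, one way the built-in updater can be switched off is via a default pref. This is a sketch under the assumption that the Mozilla-era `app.update.*` prefs apply; it is not confirmed as the actual mechanism the debs use:

```
// Hypothetical file dropped into defaults/pref/ by the packaging:
// a root-owned install can't self-update from a user session, so the
// internal updater is disabled and apt delivers updates instead.
pref("app.update.enabled", false);
```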
Ah, but that script has
That's OK. There's essentially no chance we'd worry about old distro compatibility the way we've had to worry about macOS and Windows compatibility.
Right. If we do decide to split things up and use redirects to S3 for the debs, I'll deal with it. For now we can assume they'll just go to the same place.
You have no idea what a relief this is 😆
That's cool. Then it's mostly done -- at the
edit: I see now, it is indeed "and"
Alright, the current script can be started after
"the current script" being #73 |
Resurrecting this in light of https://forums.zotero.org/discussion/comment/394321/#Comment_394321 What's the current feasibility of just integrating the .deb build step into our Linux build script? We run the build script for Linux releases manually in a Linux (but non-Debian-based) environment.
As long as either the tarball or the layout pre-tar is available, it can create the package. The main question is whether it is desired to retain older debs: when a new deb is added, the old debs must be present to rebuild the index. I do that now by downloading the release when a new build is needed. Another question is version management. The debs follow the Zotero release of course, but if a problem is found in the packaging itself, I add an extra sub-minor component to keep the Zotero release number while still prompting apt to upgrade. These sub-minors are semiautomatic. I can explain this when we have the infra in place. I'm in the last stages of a major simplification of the build script, which should be done over the weekend.
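The sub-minor scheme above leans on how Debian versions compare: everything after the final "-" is the package revision, and bumping it sorts higher while the upstream Zotero version stays put. A toy illustration of that ordering (a deliberate simplification of the real dpkg algorithm, which also handles epochs, "~", and mixed alphanumeric runs; version strings are illustrative):

```python
def split_deb_version(v: str):
    """Split 'upstream-revision' into numeric tuples; revision defaults to 0."""
    if "-" in v:
        upstream, _, revision = v.rpartition("-")  # revision follows the last "-"
    else:
        upstream, revision = v, ""
    to_tuple = lambda s: tuple(int(p) for p in s.split(".")) if s else (0,)
    return to_tuple(upstream), to_tuple(revision)

def newer(a: str, b: str) -> bool:
    """True if apt would treat a as an upgrade over b (simplified)."""
    return split_deb_version(a) > split_deb_version(b)

# Same Zotero release, repackaged with a bumped revision: still an upgrade.
print(newer("5.0.96-2", "5.0.96-1"))   # True
# A new upstream release beats any repack of the previous one.
print(newer("5.0.97-1", "5.0.96-3"))   # True
```

This is why a packaging-only fix can ship without pretending a new Zotero version exists.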
A Debian environment is required, but that is easily done in a GitHub action (which is where it runs now).
Yes, our packaging scripts do something similar to generate binary diffs. There's a cache folder of recent downloads, and if they're missing it will download them based on an index of recent release versions and a specified number of versions to support. But what's the practical upshot in this case? These are the older versions that people can install manually by passing a different version number to dpkg/apt? And if they're not present those versions become no longer available for manual installs?
We'd have to do it in Docker. We do tests via CI but not builds.
Correct. My assumption is that people might sometimes want to downgrade for diagnosis / emergencies. But if not present during the repo build, they'd be unavailable to apt; if the debs are available elsewhere they could be installed using dpkg.
Docker would work.
OK, happy to look at whatever you have after you're done working on it. Would be nice to make these part of the official release process.
That depends on which URL the files are available under. If e.g. a CloudFront redirect/proxy were put in place, it would just mean the CloudFront redirect would be changed, not the URL, and that would be seamless for the user. If the URL changes, the user needs to act to follow the change.
That depends on the environment in which the script is run. I have my current repo refresh broken up in two parts:
at which point part 1 detects that changes have been made and updates the hosting. If this is to be run from the Zotero build, you'd only need part 2, and that is ready to run with some configuration only. If the Zotero build environment takes care of part 1, that should be all. Another way to set this up would be to use the gh cli to just do
It did turn out that I had just misconfigured b2 + cloudfront, so it should be possible for me to set up a reliable repo that way with minimal cost through the bandwidth alliance.
I'm seeing if I can get the build script to run via Docker.
You will need to have a gpg key in the docker environment to sign the packages (and have it listed in
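For anyone replicating the signing step in a container, the mechanics look roughly like this. It's a hedged sketch: the key parameters and file names are illustrative, not the actual zotero-deb configuration, and the Release file here is a one-line stand-in for real apt-ftparchive output.

```shell
#!/bin/sh
set -e
# Isolated throwaway keyring so nothing touches the host's GPG state.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Unattended key generation, no passphrase (illustration only).
gpg --batch --quiet --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: Repo Signing Sketch
Name-Email: repo@example.invalid
Expire-Date: 0
%commit
EOF

printf 'Origin: example-repo\n' > Release   # stand-in for the real Release file

# Detached armored signature, checked by older apt clients.
gpg --batch --yes --armor --detach-sign -o Release.gpg Release
# Inline-signed copy, preferred by modern apt.
gpg --batch --yes --clearsign -o InRelease Release

gpg --verify Release.gpg Release   # sanity check before upload
```

In a real build, the private key would be injected into the container (e.g. as a CI secret) rather than generated on the fly.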
Does anything actually check the signature? Also, what's the deal with this in config.ini?
Looks like Release.gpg is maybe checked by apt? I assume that's created here using the same configured key? Are the .deb signatures actually checked? |
the
In the case a problem is found in the packaging process itself after the debs have already been made available, I need to be able to generate a new package-version from the same zotero-version. In the case of
correct.
Yep, by
I think I have the B2 hosting through the bandwidth alliance set up correctly now, so I could in principle switch the hosting to that, and it would be at no cost. I'm keeping the B2 repo fresh right now; I am not yet announcing it because I don't want to have the users switch too often, but the SourceForge mirror system isn't great -- every few days the automated tests fail because of it without any changes on my end, and that means users are hitting this too. GH releases are stable but require a patched client that ESR users certainly won't have. B2/S3 solves these issues. BTW, I don't know how much hosting runs for Zotero, but if it's substantial, the bandwidth alliance could be worth looking into.
Meaning you can use CloudFlare's CDN on their free plan and not pay for egress from B2? I couldn't find anything on the CloudFlare pricing page that actually discussed data transfer limits… I still need to set up GPG but we can probably take this over. Before we commit to this, how unusual is what we're doing here? How likely is it that something's going to change in Ubuntu that breaks upgrading from existing installs? This issue aside, that's happened at least once before, right? I pretty much only ever use yum-based distros, and custom yum repos are fairly common, but I'm not clear on whether this is as straightforward as those.
That's my understanding, yes, and since on B2 you don't pay for ingress, the only charged actions are authentication and listing the contents of the bucket, but for me that falls within the free daily allotment.
I've been looking for a while but haven't found any small print. The reason I was getting charged earlier was that for each rebuild, I was downloading the repo outside the bandwidth alliance path, and while I was developing this I did that frequently.
Cool.
This repo setup has been in Debian since the beginning as far as I can tell, but outside Ubuntu's PPA system there hasn't really been a strong culture of third-party repos in Debian, so documentation on it was pretty blurry. What has changed is where it is recommended the public key be stored on the user's system. I don't see that changing anymore, since each repo now uses its own keychain; the use of the single shared keychain was the (potential) problem. The current install.sh simply overwrites the keychain, so after users have switched to the Zotero-hosted repo, my public key will be gone from their system.
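The per-repo keychain arrangement described above shows up in the sources entry itself. A sketch of the now-recommended form, with illustrative paths and hostname:

```
# Modern form: the repo's public key lives in its own keyring file and is
# referenced explicitly via signed-by, instead of being added to the global
# trusted keyring with the deprecated apt-key.
deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://example.com/apt/ ./
```

Because trust is scoped to this one keyring file, replacing or removing the file cleanly swaps which key (and therefore which publisher) the repo is trusted under.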
I haven't looked into yum repos but was actually thinking about creating RPM packages. But apt repos aren't all that hard in the end. The storage of the key for all (including the official) repos was an issue, but that's no longer in the install.sh |
Any news?
Any news on this?
If it's unlikely that this is happening soon(ish), I need to finish setting up the b2-based repo and move everything to that. SourceForge is frustratingly spotty, and GitHub runs into a Debian problem that's not going to be solved quickly; nothing in Debian happens quickly. I'd rather not have the users move another time when the Zotero repo goes live, but SourceForge is unworkable.
I'm going to prep migration to b2. I really can't wait any longer. It's disruptive to these users, and it weighs on me that I don't have a solution for them.
What environment does the build run in? Not Debian-based, but something else, then? Maybe I can get the required tools to work in the actual build environment, so I may not need to get familiar with Docker.
OK, took some work, but I've got what appears to be a successfully signed repo via Docker. Need to do some more refinement and testing, but I think this will work. More to come.
Can you explain rebuild.py? Do I need that, or can I just hard-code
Other questions:
I used
This is the output of
I'm trying again with
Seems like the best we can do is hard-code a few things that will hopefully be available on most systems, no? (And then work on getting on a newer Firefox as soon as Z6 is out…)
Oh, but after building on 20.04, it works. And the package name for libgdk-pixbuf is slightly different: 20.04:
22.04:
Seems like they fixed the package name? Hopefully in a way where depending on libgdk-pixbuf2.0-0 still works in 22.04? |
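If depending on the old name alone turns out not to work on 22.04, one conventional way to cover both names is an or-dependency in the control file. A hypothetical fragment, using the two package names from this thread:

```
# Hypothetical debian/control fragment: either name satisfies the
# dependency, so a single .deb installs on Ubuntu 20.04
# (libgdk-pixbuf2.0-0) as well as 22.04 (libgdk-pixbuf-2.0-0).
Depends: libgdk-pixbuf2.0-0 | libgdk-pixbuf-2.0-0
```

apt tries the alternatives left to right, so the more common name should come first.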
Yes.
That depends on how the build environment is laid out, and what you want to have show up as the channel.
I saw a recommendation for the latter option somewhere in the Debian discussion lists. I don't recall where, sorry. There is no material difference between these two.
It is unexpected to me, and since I run this script every two hours in a github action, it doesn't seem like it's something needed structurally. I'd venture to guess that this has to do with the distro picked for the docker image.
I'm not sure why you shouldn't see that? If I issue
Ubuntu 20.04.3 LTS (github actions) and Ubuntu 21.10 (my own system)
22.04 is not out yet though
this is the beta?
This is indeed picked up in build.py
It will work as long as the build environment has the same ESR as zotero uses. But I can disable this automatic pickup and add the required packages to the (existing) list of explicit packages.
That will work.
They sometimes include compatibility packages, but I can't say whether that will happen here. If there is no compat package, I can deal with it in a |
How did you run this BTW? I have Docker installed but how to use it is still pretty fuzzy to me. If I can replicate the run steps it'd be easier for me to change
WRT to the differences between 20.10/20.04 and 22.04, another option than using a |
Thanks for the responses. I'll let you know if I have other questions, but you don't need to do anything else here — I've already made lots of changes for our purposes, and just need to do some more testing.
In light of this, a |
(Haven't forgotten about this. Couldn't get to it for Zotero 6, but should be soon.)
I have in the meantime gotten the Backblaze hosting up and running; I'm deprecating the GH release and the SF mirror (although I will keep them up), but for the moment, I'm good. Is there interest in the postinst detecting/adjusting for ChromeOS? Or is that best left to the user?
Sorry if this is only tangentially related now, but I just ran deb2npm to see which node dependencies are already packaged and which are missing from Debian, which is relevant for any efforts/interest in packaging Zotero within Debian. Results below:
With the switch to a single standalone version, it's possible we should put out an official Ubuntu package. I have no idea what's involved with this, but https://forums.zotero.org/discussion/25317/install-zotero-standalone-from-ubuntu-linux-mint-ppa/p1 and https://github.com/smathot/zotero_installer are possibly relevant.
I'm not going to set this up (other than creating an official Zotero account somewhere if necessary), but if it can be made simple enough I'm happy to add it to the existing deploy scripts.