Compare commits

...

77 commits
v0.5.2 ... main

Author SHA1 Message Date
Philipp Wolfer
499786cab9
Introduced Backend.Close method
This allows Backend implementations to free used resources.
Currently used for musicbrainzws2.Client
2025-06-10 08:30:26 +02:00
Philipp Wolfer
c1a480a1a6
Update dependencies 2025-06-10 08:09:44 +02:00
Philipp Wolfer
0c02466399 Moved JSONLFile to models package 2025-05-26 17:38:50 +02:00
Philipp Wolfer
ed0c31c00f
Update changelog 2025-05-25 15:54:25 +02:00
Philipp Wolfer
0115eca1c6
Minor code cleanup when creating time.Duration 2025-05-25 15:53:01 +02:00
Philipp Wolfer
78a05e9f54
Implemented deezer-history loves export 2025-05-25 15:49:06 +02:00
Philipp Wolfer
e85090fe4a
Implemented deezer-history backend listen import 2025-05-25 15:38:48 +02:00
Philipp Wolfer
1244405747
Moved archive package to public pkg/ 2025-05-25 12:51:36 +02:00
Philipp Wolfer
28c618ffce
Implemented tests and added documentation for archive 2025-05-25 12:46:44 +02:00
Philipp Wolfer
4da5697435
If dump does not write to a file, output the result as log 2025-05-24 20:54:20 +02:00
Philipp Wolfer
312d9860cf
Fixed import log output duplicating 2025-05-24 20:43:02 +02:00
Philipp Wolfer
b1b0df7763
listenbrainz: fixed timestamp update with duplicates 2025-05-24 18:52:15 +02:00
Philipp Wolfer
b18a6c2104
Update changelog and README
Clarify that some services are not suited for full listen history export
2025-05-24 18:30:26 +02:00
Philipp Wolfer
c29b2e20cd
deezer: fixed endless export loop if user's listen history is empty 2025-05-24 18:22:42 +02:00
Philipp Wolfer
93767df567
Allow editing config option after renaming 2025-05-24 17:54:24 +02:00
Philipp Wolfer
1ef498943b
Renamed parameter for lbarchive also to "archive-file" 2025-05-24 17:38:19 +02:00
Philipp Wolfer
7fb77da135
Allow reading Spotify history directly from ZIP file 2025-05-24 17:35:19 +02:00
Philipp Wolfer
ef6780701a
Use ExtendTrackMetadata also for LB API loves export 2025-05-24 17:08:15 +02:00
Philipp Wolfer
f70b6248b6
Update musicbrainzws2 to fix rate limit issues 2025-05-24 16:48:38 +02:00
Philipp Wolfer
4ad89d287d
Rework ratelimit code
Simplify variables and avoid potential error if retry header reading fails
2025-05-24 16:47:13 +02:00
Philipp Wolfer
7542657925
Use LB API to lookup missing metadata for loves
This is faster than using the MBID API individually
2025-05-24 16:46:10 +02:00
Philipp Wolfer
dddd2e4eec
Implemented lbarchive loves export 2025-05-24 11:59:35 +02:00
Philipp Wolfer
d250952678
Extend dump backend to be able to write to a file 2025-05-24 11:59:09 +02:00
Philipp Wolfer
975e208254
Simplify dirArchive by using os.DirFS and have Archive.Open return fs.File 2025-05-24 02:20:07 +02:00
Philipp Wolfer
0231331209
Implemented listenbrainz.ExportArchive.IterFeedback 2025-05-24 01:23:12 +02:00
Philipp Wolfer
cf5319309a
Renamed listenbrainz.Archive to listenbrainz.ExportArchive 2025-05-24 00:51:28 +02:00
Philipp Wolfer
8462b9395e
Keep listenbrainz package internal for now 2025-05-24 00:47:40 +02:00
Philipp Wolfer
1025277ba9
Moved generic archive abstraction into separate package 2025-05-24 00:39:21 +02:00
Philipp Wolfer
424305518b
Implemented directory mode for listenbrainz-archive 2025-05-24 00:37:17 +02:00
Philipp Wolfer
92e7216fac
Implemented listenbrainz-archive backend with listen export support 2025-05-24 00:37:16 +02:00
Philipp Wolfer
5c56e480f1
Moved general LB related code to separate package 2025-05-24 00:37:16 +02:00
Philipp Wolfer
34b6bb9aa3
Use filepath.Join instead of file.Join 2025-05-24 00:37:05 +02:00
Philipp Wolfer
142d38e9db
Release 0.6.0 2025-05-23 10:10:08 +02:00
Philipp Wolfer
3b9d07e6b5 Implemented ScrobblerLog.ParseIter 2025-05-23 10:00:22 +02:00
Philipp Wolfer
15755458e9 Fixed iterProgress not stopping if yield returns false 2025-05-23 09:59:34 +02:00
Philipp Wolfer
5927f41a83
Revert "jspf/scrobblerlog: return results in batches"
This reverts commit a8ce2be5d7.
2025-05-23 08:52:23 +02:00
Philipp Wolfer
b7ce09041e
Fix potential zero division error in iterProgress 2025-05-23 08:12:37 +02:00
Philipp Wolfer
a8ce2be5d7
jspf/scrobblerlog: return results in batches
This allows the importer to start working while export is still in progress
2025-05-23 07:48:42 +02:00
Philipp Wolfer
c7af90b585
More granular progress report for JSPF and scrobblerlog 2025-05-23 07:47:52 +02:00
Philipp Wolfer
83eac8c801
Import progress shows actual number of processed items 2025-05-23 07:40:52 +02:00
Philipp Wolfer
12eb7acd98
Update changelog 2025-05-22 18:56:49 +02:00
Philipp Wolfer
e9768c0934
Upgrade musicbrainzws2 2025-05-22 18:45:39 +02:00
Philipp Wolfer
dacfb72f7d
Upgrade dependencies 2025-05-22 17:09:26 +02:00
Philipp Wolfer
20853f7601
Simplify context cancellation checks 2025-05-22 14:13:31 +02:00
Philipp Wolfer
4a66e3d432
Pass context to import backends 2025-05-22 11:53:08 +02:00
Philipp Wolfer
26d9f5e840
Pass context to export backends 2025-05-22 11:53:05 +02:00
Philipp Wolfer
b5bca1d4ab
Use context aware musicbrainzws2 2025-05-22 11:51:53 +02:00
Philipp Wolfer
d1642b7f1f
Make web service clients context aware 2025-05-22 11:51:53 +02:00
Philipp Wolfer
adfe3f5771
Use the transfer context also for the progress bars 2025-05-22 11:51:52 +02:00
Philipp Wolfer
3b545a0fd6
Prepare using a context for export / import
This will allow cancelling the export if the import fails
before the export has finished.

For now the context isn't passed on to the actual export functions,
hence there is not yet any cancellation happening.
2025-05-22 11:51:51 +02:00
Philipp Wolfer
536fae6a46 ScrobblerLog.ReadHeader now accepts io.Reader 2025-05-22 11:51:23 +02:00
Philipp Wolfer
97600d8190 Update dependencies 2025-05-09 07:38:28 +02:00
Philipp Wolfer
a42b5d784d
Added short doc string to ratelimit package 2025-05-08 07:40:12 +02:00
Philipp Wolfer
a87c42059f
Use a WaitGroup to wait for both export and import goroutine to finish 2025-05-05 17:49:44 +02:00
Philipp Wolfer
17cee9cb8b
For import progress show actually processed and total count 2025-05-05 17:39:47 +02:00
Philipp Wolfer
b8e6ccffdb
Initial implementation of unified export/import progress
Both export and import progress get updated over a unified channel.
Most importantly this allows updating the import total from latest
export results.
2025-05-05 11:38:29 +02:00
Philipp Wolfer
1f48abc284
Fixed timestamp displayed after import not being the updated one 2025-05-04 15:18:14 +02:00
Philipp Wolfer
54fffce1d9
Update translation files 2025-05-04 13:31:44 +02:00
Philipp Wolfer
cb6a534fa1 Translated using Weblate (German)
Currently translated at 100.0% (55 of 55 strings)

Co-authored-by: Philipp Wolfer <phw@uploadedlobster.com>
Translate-URL: https://translate.uploadedlobster.com/projects/scotty/app/de/
Translation: Scotty/app
2025-05-04 11:25:07 +00:00
Philipp Wolfer
05f0e8d172
Change string for aborted progress bar 2025-05-04 13:24:12 +02:00
Philipp Wolfer
a8517ea249
funkwhale: fix progress abort on error 2025-05-04 13:22:51 +02:00
Philipp Wolfer
dfe6773744
Update translations 2025-05-04 13:07:02 +02:00
Philipp Wolfer
aae5123c3d
Show progress bars as aborted on export / import error 2025-05-04 13:06:48 +02:00
Philipp Wolfer
15d939e150
Update changelog 2025-05-04 12:56:50 +02:00
Philipp Wolfer
55ac41b147
If import fails still save the last reported timestamp
This allows continuing a partially failed import run.
2025-05-04 11:53:46 +02:00
Philipp Wolfer
069f0de2ee
Call "FinishImport" even on error
This gives the importer the chance to close connections
and free resources to ensure already imported items are
properly handled.
2025-05-04 11:53:45 +02:00
Philipp Wolfer
3b1adc9f1f
Fix duplicate calls to handle import errors
This fixes the import process hanging on error
2025-05-04 11:53:43 +02:00
Philipp Wolfer
1c3364dad5
Close export results channel in generic implementation
This removes the need for every implementation to handle this case.
2025-05-04 11:53:42 +02:00
Philipp Wolfer
9480c69cbb
Handle wait group for progress bar centrally
This does not need to be exposed and the caller only
needs to wait for the Progress instance.
2025-05-04 11:53:35 +02:00
Philipp Wolfer
b3136bde9a
jspf: add MB extension, if it does not exist 2025-05-04 11:52:45 +02:00
Philipp Wolfer
8885e9cebc
Fix scrobblerlog timezone not being set from config 2025-05-02 21:35:14 +02:00
Philipp Wolfer
bd7a35cd68
Update dependencies 2025-05-02 16:28:54 +02:00
Philipp Wolfer
d757129bd7
jspf: also set username and recording MSID in exports 2025-05-01 15:20:37 +02:00
Philipp Wolfer
a645ec5c78
JSPF: Implemented export as loves and listens 2025-05-01 15:10:00 +02:00
Philipp Wolfer
cfc3cd522d
scrobblerlog: fix listen export not considering latest timestamp 2025-05-01 14:09:12 +02:00
Philipp Wolfer
443734e4c7
jspf: write duration to exported JSPF 2025-05-01 13:48:21 +02:00
Philipp Wolfer
588a6cf96f
Document the scrobblerlog package 2025-05-01 13:22:20 +02:00
73 changed files with 3350 additions and 1090 deletions


@@ -1,5 +1,45 @@
# Scotty Changelog
## 0.7.0 - WIP
- listenbrainz-archive: new backend to load listens and loves from a
ListenBrainz export. The data can be read from the downloaded ZIP archive
or a directory where the contents of the archive have been extracted.
- listenbrainz: faster loading of missing loves metadata using the ListenBrainz
API instead of MusicBrainz. Falls back to a slower MusicBrainz query if
ListenBrainz does not provide the data.
- listenbrainz: fixed an issue where the timestamp was not updated properly if
duplicate listens were detected during import.
- spotify-history: it is now possible to specify the path directly to the
`my_spotify_data_extended.zip` ZIP file as downloaded from Spotify.
- spotify-history: the parameter for the export archive path has been renamed to
`archive-path`. For backward compatibility the old `dir-path` parameter is
still read.
- deezer-history: new backend to import listens and loves from a Deezer data export.
- deezer: fixed endless export loop if the user's listen history was empty.
- dump: it is now possible to specify a file to write the text output to.
- Fixed potential issues with MusicBrainz rate limiting.
- Fixed duplicated import log output.
## 0.6.0 - 2025-05-23
- Fully reworked progress report
- Cancel both export and import on error
- Show progress bars as aborted on export / import error
- The import progress is now aware of the total amount of exported items
- The import progress shows total items processed instead of time estimate
- Fix program hanging endlessly if import fails (#11)
- If import fails still store the last successfully imported timestamp
- More granular progress updates for JSPF and scrobblerlog
- JSPF: implemented export as loves and listens
- JSPF: write track duration
- JSPF: read username and recording MSID
- JSPF: add MusicBrainz playlist extension in append mode, if it does not
exist in the existing JSPF file
- scrobblerlog: fix timezone not being set from config (#6)
- scrobblerlog: fix listen export not considering latest timestamp
- Funkwhale: fix progress abort on error
## 0.5.2 - 2025-05-01
- ListenBrainz: fixed loves export not considering latest timestamp
@@ -16,9 +56,9 @@
- ListenBrainz: log missing recording MBID on love import
- Subsonic: support OpenSubsonic fields for recording MBID and genres (#5)
- Subsonic: fixed progress for loves export
- scrobblerlog: add "time-zone" config option (#6).
- scrobblerlog: add "time-zone" config option (#6)
- scrobblerlog: fixed progress for listen export
- scrobblerlog: renamed setting `include-skipped` to `ignore-skipped`.
- scrobblerlog: renamed setting `include-skipped` to `ignore-skipped`
Note: 386 builds for Linux are not available with this release due to an
incompatibility with the latest version of gorm.


@@ -118,12 +118,14 @@ scotty beam listens deezer listenbrainz --timestamp "2023-12-06 14:26:24"
The following table lists the available backends and the currently supported features.
Backend | Listens Export | Listens Import | Loves Export | Loves Import
----------------|----------------|----------------|--------------|-------------
---------------------|----------------|----------------|--------------|-------------
deezer | ✓ | | ✓ | -
deezer-history | ✓ | | ✓ |
funkwhale | ✓ | | ✓ | -
jspf | - | ✓ | - | ✓
jspf | ✓ | ✓ | ✓ | ✓
lastfm | ✓ | ✓ | ✓ | ✓
listenbrainz | ✓ | ✓ | ✓ | ✓
listenbrainz-archive | ✓ | - | ✓ | -
maloja | ✓ | ✓ | |
scrobbler-log | ✓ | ✓ | |
spotify | ✓ | | ✓ | -
@@ -134,6 +136,12 @@ subsonic | | | ✓ | -
See the comments in [config.example.toml](./config.example.toml) for a description of each backend's available configuration options.
**NOTE:** Some services, e.g. the Spotify and Deezer API, do not provide access
to the user's full listening history. Hence the API integrations are not suited
for a full history export. They can, however, be used to continuously transfer
recent listens to other services when running scotty frequently, e.g. as a
cron job.
## Contribute
The source code for Scotty is available on [SourceHut](https://sr.ht/~phw/scotty/). To report issues or feature requests please [create a ticket](https://todo.sr.ht/~phw/scotty).
@@ -145,7 +153,7 @@ You can help translate this project into your language with [Weblate](https://tr
## License
Scotty © 2023-2024 Philipp Wolfer <phw@uploadedlobster.com>
Scotty © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Scotty is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.


@@ -19,6 +19,15 @@ token = ""
# not already exists in your ListenBrainz profile.
check-duplicate-listens = false
[service.listenbrainz-archive]
# This backend supports reading listens and loves from a ListenBrainz export
# archive (https://listenbrainz.org/settings/export/).
# (https://listenbrainz.org/settings/export/).
backend = "listenbrainz-archive"
# The file path to the ListenBrainz export archive. The path can either point
# to the ZIP file as downloaded from ListenBrainz or a directory where the
# ZIP was extracted.
archive-path = "./listenbrainz_outsidecontext.zip"
[service.maloja]
# Maloja is a self hosted listening service (https://github.com/krateng/maloja)
backend = "maloja"
@@ -87,6 +96,8 @@ identifier = ""
[service.spotify]
# Read listens and loves from a Spotify account
# NOTE: The Spotify API does not allow access to the full listen history,
# but only to recent listens.
backend = "spotify"
# You need to register an application on https://developer.spotify.com/
# and set the client ID and client secret below.
@@ -98,9 +109,11 @@ client-secret = ""
[service.spotify-history]
# Read listens from a Spotify extended history export
backend = "spotify-history"
# Directory where the extended history JSON files are located. The files must
# follow the naming scheme "Streaming_History_Audio_*.json".
dir-path = "./my_spotify_data_extended/Spotify Extended Streaming History"
# Path to the Spotify extended history archive. This can either point directly
# to the "my_spotify_data_extended.zip" ZIP file provided by Spotify or a
# directory where this file has been extracted. The history files are
# expected to follow the naming pattern "Streaming_History_Audio_*.json".
archive-path = "./my_spotify_data_extended.zip"
# If true (default), ignore listens from a Spotify "private session".
ignore-incognito = true
# If true, ignore listens marked as skipped. Default is false.
@@ -111,7 +124,9 @@ ignore-skipped = false
ignore-min-duration-seconds = 30
[service.deezer]
# Read listens and loves from a Deezer account
# Read listens and loves from a Deezer account.
# NOTE: The Deezer API does not allow access to the full listen history,
# but only to recent listens.
backend = "deezer"
# You need to register an application on https://developers.deezer.com/myapps
# and set the client ID and client secret below.
@@ -120,6 +135,15 @@ backend = "deezer"
client-id = ""
client-secret = ""
[service.deezer-history]
# Read listens from a Deezer data export.
# You can request a download of all your Deezer data, including the complete
# listen history, in the section "My information" in your Deezer
# "Account settings".
backend = "deezer-history"
# Path to XLSX file provided by Deezer, e.g. "deezer-data_520704045.xlsx".
file-path = ""
[service.lastfm]
backend = "lastfm"
# Your Last.fm username
@@ -135,3 +159,9 @@ client-secret = ""
# This backend allows writing listens and loves as console output. Useful for
# debugging the export from other services.
backend = "dump"
# Path to a file where the listens and loves are written. If not set,
# the output is written to stdout.
file-path = ""
# If true (default), new listens will be appended to the existing file. Set to
# false to overwrite the file on every run.
append = true

go.mod

@@ -15,19 +15,21 @@ require (
github.com/manifoldco/promptui v0.9.0
github.com/pelletier/go-toml/v2 v2.2.4
github.com/shkh/lastfm-go v0.0.0-20191215035245-89a801c244e0
github.com/spf13/cast v1.7.1
github.com/simonfrey/jsonl v0.0.0-20240904112901-935399b9a740
github.com/spf13/cast v1.9.2
github.com/spf13/cobra v1.9.1
github.com/spf13/viper v1.20.1
github.com/stretchr/testify v1.10.0
github.com/supersonic-app/go-subsonic v0.0.0-20241224013245-9b2841f3711d
github.com/vbauerster/mpb/v8 v8.9.3
github.com/vbauerster/mpb/v8 v8.10.2
github.com/xuri/excelize/v2 v2.9.1
go.uploadedlobster.com/mbtypes v0.4.0
go.uploadedlobster.com/musicbrainzws2 v0.14.0
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0
golang.org/x/oauth2 v0.29.0
golang.org/x/text v0.24.0
go.uploadedlobster.com/musicbrainzws2 v0.16.0
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476
golang.org/x/oauth2 v0.30.0
golang.org/x/text v0.26.0
gorm.io/datatypes v1.2.5
gorm.io/gorm v1.26.0
gorm.io/gorm v1.30.0
)
require (
@@ -51,25 +53,31 @@ require (
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/richardlehane/mscfb v1.0.4 // indirect
github.com/richardlehane/msoleps v1.0.4 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/sagikazarmark/locafero v0.9.0 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.14.0 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tiendc/go-deepcopy v1.6.1 // indirect
github.com/xuri/efp v0.0.1 // indirect
github.com/xuri/nfp v0.0.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/image v0.26.0 // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/net v0.39.0 // indirect
golang.org/x/sync v0.13.0 // indirect
golang.org/x/sys v0.32.0 // indirect
golang.org/x/tools v0.32.0 // indirect
golang.org/x/crypto v0.39.0 // indirect
golang.org/x/image v0.28.0 // indirect
golang.org/x/mod v0.25.0 // indirect
golang.org/x/net v0.41.0 // indirect
golang.org/x/sync v0.15.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/tools v0.34.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
gorm.io/driver/mysql v1.5.7 // indirect
modernc.org/libc v1.64.0 // indirect
gorm.io/driver/mysql v1.6.0 // indirect
modernc.org/libc v1.65.10 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.10.0 // indirect
modernc.org/sqlite v1.37.0 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.38.0 // indirect
)
tool golang.org/x/text/cmd/gotext

go.sum

@@ -40,7 +40,6 @@ github.com/glebarez/sqlite v1.11.0 h1:wSG0irqzP6VurnMEpFGer5Li19RpIRi2qvQz++w0GM
github.com/glebarez/sqlite v1.11.0/go.mod h1:h8/o8j5wiAsqSPoWELDUdJXhjAhsVliSn7bWZjOhrgQ=
github.com/go-resty/resty/v2 v2.16.5 h1:hBKqmWrr7uRc3euHVqmh1HTHcKn99Smr7o5spptdhTM=
github.com/go-resty/resty/v2 v2.16.5/go.mod h1:hkJtXbA2iKHzJheXYvQ8snQES5ZLGKMwQ07xAwp/fiA=
github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
github.com/go-sql-driver/mysql v1.9.2 h1:4cNKDYQ1I84SXslGddlsrMhc8k4LeDVj6Ad6WRjiHuU=
github.com/go-sql-driver/mysql v1.9.2/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/go-viper/mapstructure/v2 v2.2.1 h1:ZAaOCxANMuZx5RCeg0mBdEZk7DZasvvZIxtHqx8aGss=
@@ -97,6 +96,11 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/richardlehane/mscfb v1.0.4 h1:WULscsljNPConisD5hR0+OyZjwK46Pfyr6mPu5ZawpM=
github.com/richardlehane/mscfb v1.0.4/go.mod h1:YzVpcZg9czvAuhk9T+a3avCpcFPMUWm7gK3DypaEsUk=
github.com/richardlehane/msoleps v1.0.1/go.mod h1:BWev5JBpU9Ko2WAgmZEuiz4/u3ZYTKbjLycmwiWUfWg=
github.com/richardlehane/msoleps v1.0.4 h1:WuESlvhX3gH2IHcd8UqyCuFY5yiq/GR/yqaSM/9/g00=
github.com/richardlehane/msoleps v1.0.4/go.mod h1:BWev5JBpU9Ko2WAgmZEuiz4/u3ZYTKbjLycmwiWUfWg=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
@@ -107,12 +111,14 @@ github.com/sagikazarmark/locafero v0.9.0 h1:GbgQGNtTrEmddYDSAH9QLRyfAHY12md+8YFT
github.com/sagikazarmark/locafero v0.9.0/go.mod h1:UBUyz37V+EdMS3hDF3QWIiVr/2dPrx49OMO0Bn0hJqk=
github.com/shkh/lastfm-go v0.0.0-20191215035245-89a801c244e0 h1:cgqwZtnR+IQfUYDLJ3Kiy4aE+O/wExTzEIg8xwC4Qfs=
github.com/shkh/lastfm-go v0.0.0-20191215035245-89a801c244e0/go.mod h1:n3nudMl178cEvD44PaopxH9jhJaQzthSxUzLO5iKMy4=
github.com/simonfrey/jsonl v0.0.0-20240904112901-935399b9a740 h1:CXJI+lliMiiEwzfgE8yt/38K0heYDgQ0L3f/3fxRnQU=
github.com/simonfrey/jsonl v0.0.0-20240904112901-935399b9a740/go.mod h1:G4w16caPmc6at7u4fmkj/8OAoOnM9mkmJr2fvL0vhaw=
github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
github.com/spf13/afero v1.14.0 h1:9tH6MapGnn/j0eb0yIXiLjERO8RB6xIVZRDCX7PtqWA=
github.com/spf13/afero v1.14.0/go.mod h1:acJQ8t0ohCGuMN3O+Pv0V0hgMxNYDlvdk+VTfyZmbYo=
github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y=
github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cast v1.9.2 h1:SsGfm7M8QOFtEzumm7UZrZdLLquNdzFYfIbEXntcFbE=
github.com/spf13/cast v1.9.2/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
@@ -125,41 +131,49 @@ github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/supersonic-app/go-subsonic v0.0.0-20241224013245-9b2841f3711d h1:70+Nn7yh+cfeKqqXVTdpneFqXuvrBLyP7U6GVUsjTU4=
github.com/supersonic-app/go-subsonic v0.0.0-20241224013245-9b2841f3711d/go.mod h1:D+OWPXeD9owcdcoXATv5YPBGWxxVvn5k98rt5B4wMc4=
github.com/vbauerster/mpb/v8 v8.9.3 h1:PnMeF+sMvYv9u23l6DO6Q3+Mdj408mjLRXIzmUmU2Z8=
github.com/vbauerster/mpb/v8 v8.9.3/go.mod h1:hxS8Hz4C6ijnppDSIX6LjG8FYJSoPo9iIOcE53Zik0c=
github.com/tiendc/go-deepcopy v1.6.1 h1:uVRTItFeNHkMcLueHS7OCsxgxT9P8MzGB/taUa2Y4Tk=
github.com/tiendc/go-deepcopy v1.6.1/go.mod h1:toXoeQoUqXOOS/X4sKuiAoSk6elIdqc0pN7MTgOOo2I=
github.com/vbauerster/mpb/v8 v8.10.2 h1:2uBykSHAYHekE11YvJhKxYmLATKHAGorZwFlyNw4hHM=
github.com/vbauerster/mpb/v8 v8.10.2/go.mod h1:+Ja4P92E3/CorSZgfDtK46D7AVbDqmBQRTmyTqPElo0=
github.com/xuri/efp v0.0.1 h1:fws5Rv3myXyYni8uwj2qKjVaRP30PdjeYe2Y6FDsCL8=
github.com/xuri/efp v0.0.1/go.mod h1:ybY/Jr0T0GTCnYjKqmdwxyxn2BQf2RcQIIvex5QldPI=
github.com/xuri/excelize/v2 v2.9.1 h1:VdSGk+rraGmgLHGFaGG9/9IWu1nj4ufjJ7uwMDtj8Qw=
github.com/xuri/excelize/v2 v2.9.1/go.mod h1:x7L6pKz2dvo9ejrRuD8Lnl98z4JLt0TGAwjhW+EiP8s=
github.com/xuri/nfp v0.0.1 h1:MDamSGatIvp8uOmDP8FnmjuQpu90NzdJxo7242ANR9Q=
github.com/xuri/nfp v0.0.1/go.mod h1:WwHg+CVyzlv/TX9xqBFXEZAuxOPxn2k1GNHwG41IIUQ=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uploadedlobster.com/mbtypes v0.4.0 h1:D5asCgHsRWufj4Yn5u0IuH2J9z1UuYImYkYIp1Z1Q7s=
go.uploadedlobster.com/mbtypes v0.4.0/go.mod h1:Bu1K1Hl77QTAE2Z7QKiW/JAp9KqYWQebkRRfG02dlZM=
go.uploadedlobster.com/musicbrainzws2 v0.14.0 h1:YaEtxNwLSNT1gzFipQ4XlaThNfXjBpzzb4I6WhIeUwg=
go.uploadedlobster.com/musicbrainzws2 v0.14.0/go.mod h1:T6sYE7ZHRH3mJWT3g9jdSUPKJLZubnBjKyjMPNdkgao=
go.uploadedlobster.com/musicbrainzws2 v0.16.0 h1:Boux1cZg5S559G/pbQC35BoF+1H7I56oxhBwg8Nzhs0=
go.uploadedlobster.com/musicbrainzws2 v0.16.0/go.mod h1:T6sYE7ZHRH3mJWT3g9jdSUPKJLZubnBjKyjMPNdkgao=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0 h1:R84qjqJb5nVJMxqWYb3np9L5ZsaDtB+a39EqjV0JSUM=
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0/go.mod h1:S9Xr4PYopiDyqSyp5NjCrhFrqg6A5zA2E/iPHPhqnS8=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 h1:bsqhLWFR6G6xiQcb+JoGqdKdRU6WzPWmK8E0jxTjzo4=
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
golang.org/x/image v0.13.0/go.mod h1:6mmbMOeV28HuMTgA6OSRkdXKYw/t5W9Uwn2Yv1r3Yxk=
golang.org/x/image v0.26.0 h1:4XjIFEZWQmCZi6Wv8BoxsDhRU3RVnLX04dToTDAEPlY=
golang.org/x/image v0.26.0/go.mod h1:lcxbMFAovzpnJxzXS3nyL83K27tmqtKzIJpctK8YO5c=
golang.org/x/image v0.28.0 h1:gdem5JW1OLS4FbkWgLO+7ZeFzYtL3xClb97GaUzYMFE=
golang.org/x/image v0.28.0/go.mod h1:GUJYXtnGKEUgggyzh+Vxt+AviiCcyiwpsl8iQ8MvwGY=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/oauth2 v0.29.0 h1:WdYw2tdTK1S8olAzWHdgeqfy+Mtm9XNhv/xJsY65d98=
golang.org/x/oauth2 v0.29.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -169,8 +183,8 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -179,16 +193,16 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.32.0 h1:Q7N1vhpkQv7ybVzLFtTjvQya2ewbwNDZzUgfXGqtMWU=
golang.org/x/tools v0.32.0/go.mod h1:ZxrU41P/wAbZD8EDa6dDCa6XfpkhJ7HFMjHJXfBDu8s=
golang.org/x/tools v0.34.0 h1:qIpSLOxeCYGg9TrcJokLBG4KFA6d795g0xkBkiESGlo=
golang.org/x/tools v0.34.0/go.mod h1:pAP9OwEaY1CAW3HOmg3hLZC5Z0CCmzjAF2UQMSqNARg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
@@ -197,37 +211,36 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/datatypes v1.2.5 h1:9UogU3jkydFVW1bIVVeoYsTpLRgwDVW3rHfJG6/Ek9I=
gorm.io/datatypes v1.2.5/go.mod h1:I5FUdlKpLb5PMqeMQhm30CQ6jXP8Rj89xkTeCSAaAD4=
gorm.io/driver/mysql v1.5.7 h1:MndhOPYOfEp2rHKgkZIhJ16eVUIRf2HmzgoPmh7FCWo=
gorm.io/driver/mysql v1.5.7/go.mod h1:sEtPWMiqiN1N1cMXoXmBbd8C6/l+TESwriotuRRpkDM=
gorm.io/driver/mysql v1.6.0 h1:eNbLmNTpPpTOVZi8MMxCi2aaIm0ZpInbORNXDwyLGvg=
gorm.io/driver/mysql v1.6.0/go.mod h1:D/oCC2GWK3M/dqoLxnOlaNKmXz8WNTfcS9y5ovaSqKo=
gorm.io/driver/postgres v1.5.0 h1:u2FXTy14l45qc3UeCJ7QaAXZmZfDDv0YrthvmRq1l0U=
gorm.io/driver/postgres v1.5.0/go.mod h1:FUZXzO+5Uqg5zzwzv4KK49R8lvGIyscBOqYrtI1Ce9A=
gorm.io/driver/sqlite v1.4.3 h1:HBBcZSDnWi5BW3B3rwvVTc510KGkBkexlOg0QrmLUuU=
gorm.io/driver/sqlite v1.4.3/go.mod h1:0Aq3iPO+v9ZKbcdiz8gLWRw5VOPcBOPUQJFLq5e2ecI=
gorm.io/driver/sqlserver v1.5.4 h1:xA+Y1KDNspv79q43bPyjDMUgHoYHLhXYmdFcYPobg8g=
gorm.io/driver/sqlserver v1.5.4/go.mod h1:+frZ/qYmuna11zHPlh5oc2O6ZA/lS88Keb0XSH1Zh/g=
gorm.io/gorm v1.25.7/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
gorm.io/gorm v1.26.0 h1:9lqQVPG5aNNS6AyHdRiwScAVnXHg/L/Srzx55G5fOgs=
gorm.io/gorm v1.26.0/go.mod h1:8Z33v652h4//uMA76KjeDH8mJXPm1QNCYrMeatR0DOE=
modernc.org/cc/v4 v4.26.0 h1:QMYvbVduUGH0rrO+5mqF/PSPPRZNpRtg2CLELy7vUpA=
modernc.org/cc/v4 v4.26.0/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.26.0 h1:gVzXaDzGeBYJ2uXTOpR8FR7OlksDOe9jxnjhIKCsiTc=
modernc.org/ccgo/v4 v4.26.0/go.mod h1:Sem8f7TFUtVXkG2fiaChQtyyfkqhJBg/zjEJBkmuAVY=
modernc.org/fileutil v1.3.1 h1:8vq5fe7jdtEvoCf3Zf9Nm0Q05sH6kGx0Op2CPx1wTC8=
modernc.org/fileutil v1.3.1/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
gorm.io/gorm v1.30.0 h1:qbT5aPv1UH8gI99OsRlvDToLxW5zR7FzS9acZDOZcgs=
gorm.io/gorm v1.30.0/go.mod h1:8Z33v652h4//uMA76KjeDH8mJXPm1QNCYrMeatR0DOE=
modernc.org/cc/v4 v4.26.1 h1:+X5NtzVBn0KgsBCBe+xkDC7twLb/jNVj9FPgiwSQO3s=
modernc.org/cc/v4 v4.26.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.28.0 h1:rjznn6WWehKq7dG4JtLRKxb52Ecv8OUGah8+Z/SfpNU=
modernc.org/ccgo/v4 v4.28.0/go.mod h1:JygV3+9AV6SmPhDasu4JgquwU81XAKLd3OKTUDNOiKE=
modernc.org/fileutil v1.3.3 h1:3qaU+7f7xxTUmvU1pJTZiDLAIoJVdUSSauJNHg9yXoA=
modernc.org/fileutil v1.3.3/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/libc v1.64.0 h1:U0k8BD2d3cD3e9I8RLcZgJBHAcsJzbXx5mKGSb5pyJA=
modernc.org/libc v1.64.0/go.mod h1:7m9VzGq7APssBTydds2zBcxGREwvIGpuUBaKTXdm2Qs=
modernc.org/libc v1.65.10 h1:ZwEk8+jhW7qBjHIT+wd0d9VjitRyQef9BnzlzGwMODc=
modernc.org/libc v1.65.10/go.mod h1:StFvYpx7i/mXtBAfVOjaU0PWZOvIRoZSgXhrwXzr8Po=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.10.0 h1:fzumd51yQ1DxcOxSO+S6X7+QTuVU+n8/Aj7swYjFfC4=
modernc.org/memory v1.10.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.37.0 h1:s1TMe7T3Q3ovQiK2Ouz4Jwh7dw4ZDqbebSDTlSJdfjI=
modernc.org/sqlite v1.37.0/go.mod h1:5YiWv+YviqGMuGw4V+PNplcyaJ5v+vQd7TQOgkACoJM=
modernc.org/sqlite v1.38.0 h1:+4OrfPQ8pxHKuWG4md1JpR/EYAh3Md7TdejuuzE7EUI=
modernc.org/sqlite v1.38.0/go.mod h1:1Bj+yES4SVvBZ4cBOpVZ6QgesMCKpJZDq0nxYzOpmNE=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=


@ -23,10 +23,12 @@ import (
"strings"
"go.uploadedlobster.com/scotty/internal/backends/deezer"
"go.uploadedlobster.com/scotty/internal/backends/deezerhistory"
"go.uploadedlobster.com/scotty/internal/backends/dump"
"go.uploadedlobster.com/scotty/internal/backends/funkwhale"
"go.uploadedlobster.com/scotty/internal/backends/jspf"
"go.uploadedlobster.com/scotty/internal/backends/lastfm"
"go.uploadedlobster.com/scotty/internal/backends/lbarchive"
"go.uploadedlobster.com/scotty/internal/backends/listenbrainz"
"go.uploadedlobster.com/scotty/internal/backends/maloja"
"go.uploadedlobster.com/scotty/internal/backends/scrobblerlog"
@ -106,11 +108,13 @@ func GetBackends() BackendList {
var knownBackends = map[string]func() models.Backend{
"deezer": func() models.Backend { return &deezer.DeezerApiBackend{} },
"deezer-history": func() models.Backend { return &deezerhistory.DeezerHistoryBackend{} },
"dump": func() models.Backend { return &dump.DumpBackend{} },
"funkwhale": func() models.Backend { return &funkwhale.FunkwhaleApiBackend{} },
"jspf": func() models.Backend { return &jspf.JSPFBackend{} },
"lastfm": func() models.Backend { return &lastfm.LastfmApiBackend{} },
"listenbrainz": func() models.Backend { return &listenbrainz.ListenBrainzApiBackend{} },
"listenbrainz-archive": func() models.Backend { return &lbarchive.ListenBrainzArchiveBackend{} },
"maloja": func() models.Backend { return &maloja.MalojaApiBackend{} },
"scrobbler-log": func() models.Backend { return &scrobblerlog.ScrobblerLogBackend{} },
"spotify": func() models.Backend { return &spotify.SpotifyApiBackend{} },

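The registry above maps backend names to factory functions that return fresh `models.Backend` values. A minimal sketch of the lookup pattern (the `createBackend` helper and the stand-in types are illustrative, not the project's actual API):

```go
package main

import "fmt"

// Backend is a stand-in for the project's models.Backend interface,
// including the Close method introduced in this changeset.
type Backend interface {
	Name() string
	Close()
}

type dumpBackend struct{}

func (b *dumpBackend) Name() string { return "dump" }
func (b *dumpBackend) Close()       {}

// knownBackends maps a backend name to a factory, mirroring the registry
// in the diff above (only one illustrative entry here).
var knownBackends = map[string]func() Backend{
	"dump": func() Backend { return &dumpBackend{} },
}

// createBackend resolves a name to a fresh backend instance.
func createBackend(name string) (Backend, error) {
	factory, ok := knownBackends[name]
	if !ok {
		return nil, fmt.Errorf("unknown backend %q", name)
	}
	return factory(), nil
}

func main() {
	b, err := createBackend("dump")
	if err != nil {
		panic(err)
	}
	// Close lets the backend free its resources when done.
	defer b.Close()
	fmt.Println(b.Name())
}
```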

@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -18,21 +18,23 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package backends_test
import (
"reflect"
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"go.uploadedlobster.com/scotty/internal/backends"
"go.uploadedlobster.com/scotty/internal/backends/deezer"
"go.uploadedlobster.com/scotty/internal/backends/deezerhistory"
"go.uploadedlobster.com/scotty/internal/backends/dump"
"go.uploadedlobster.com/scotty/internal/backends/funkwhale"
"go.uploadedlobster.com/scotty/internal/backends/jspf"
"go.uploadedlobster.com/scotty/internal/backends/lastfm"
"go.uploadedlobster.com/scotty/internal/backends/lbarchive"
"go.uploadedlobster.com/scotty/internal/backends/listenbrainz"
"go.uploadedlobster.com/scotty/internal/backends/maloja"
"go.uploadedlobster.com/scotty/internal/backends/scrobblerlog"
"go.uploadedlobster.com/scotty/internal/backends/spotify"
"go.uploadedlobster.com/scotty/internal/backends/spotifyhistory"
"go.uploadedlobster.com/scotty/internal/backends/subsonic"
"go.uploadedlobster.com/scotty/internal/config"
"go.uploadedlobster.com/scotty/internal/i18n"
@ -85,6 +87,8 @@ func TestImplementsInterfaces(t *testing.T) {
expectInterface[models.LovesExport](t, &deezer.DeezerApiBackend{})
// expectInterface[models.LovesImport](t, &deezer.DeezerApiBackend{})
expectInterface[models.ListensExport](t, &deezerhistory.DeezerHistoryBackend{})
expectInterface[models.ListensImport](t, &dump.DumpBackend{})
expectInterface[models.LovesImport](t, &dump.DumpBackend{})
@ -93,9 +97,9 @@ func TestImplementsInterfaces(t *testing.T) {
expectInterface[models.LovesExport](t, &funkwhale.FunkwhaleApiBackend{})
// expectInterface[models.LovesImport](t, &funkwhale.FunkwhaleApiBackend{})
// expectInterface[models.ListensExport](t, &jspf.JSPFBackend{})
expectInterface[models.ListensExport](t, &jspf.JSPFBackend{})
expectInterface[models.ListensImport](t, &jspf.JSPFBackend{})
// expectInterface[models.LovesExport](t, &jspf.JSPFBackend{})
expectInterface[models.LovesExport](t, &jspf.JSPFBackend{})
expectInterface[models.LovesImport](t, &jspf.JSPFBackend{})
// expectInterface[models.ListensExport](t, &lastfm.LastfmApiBackend{})
@ -103,6 +107,11 @@ func TestImplementsInterfaces(t *testing.T) {
expectInterface[models.LovesExport](t, &lastfm.LastfmApiBackend{})
expectInterface[models.LovesImport](t, &lastfm.LastfmApiBackend{})
expectInterface[models.ListensExport](t, &lbarchive.ListenBrainzArchiveBackend{})
// expectInterface[models.ListensImport](t, &lbarchive.ListenBrainzArchiveBackend{})
expectInterface[models.LovesExport](t, &lbarchive.ListenBrainzArchiveBackend{})
// expectInterface[models.LovesImport](t, &lbarchive.ListenBrainzArchiveBackend{})
expectInterface[models.ListensExport](t, &listenbrainz.ListenBrainzApiBackend{})
expectInterface[models.ListensImport](t, &listenbrainz.ListenBrainzApiBackend{})
expectInterface[models.LovesExport](t, &listenbrainz.ListenBrainzApiBackend{})
@ -115,6 +124,8 @@ func TestImplementsInterfaces(t *testing.T) {
expectInterface[models.LovesExport](t, &spotify.SpotifyApiBackend{})
// expectInterface[models.LovesImport](t, &spotify.SpotifyApiBackend{})
expectInterface[models.ListensExport](t, &spotifyhistory.SpotifyHistoryBackend{})
expectInterface[models.ListensExport](t, &scrobblerlog.ScrobblerLogBackend{})
expectInterface[models.ListensImport](t, &scrobblerlog.ScrobblerLogBackend{})
@ -125,6 +136,6 @@ func TestImplementsInterfaces(t *testing.T) {
func expectInterface[T interface{}](t *testing.T, backend models.Backend) {
ok, name := backends.ImplementsInterface[T](&backend)
if !ok {
t.Errorf("%v expected to implement %v", reflect.TypeOf(backend).Name(), name)
t.Errorf("%v expected to implement %v", backend.Name(), name)
}
}

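The test helper leans on a generic `backends.ImplementsInterface` check. A self-contained sketch of how such a check can be written with generics and reflection (the project's helper takes a `*models.Backend` and may differ in detail):

```go
package main

import (
	"fmt"
	"reflect"
)

// ImplementsInterface reports whether value implements the interface T and
// returns T's name for use in failure messages. Sketch only; the real
// helper's signature differs.
func ImplementsInterface[T any](value any) (bool, string) {
	// reflect.TypeOf((*T)(nil)).Elem() yields the interface type itself.
	name := reflect.TypeOf((*T)(nil)).Elem().Name()
	_, ok := value.(T)
	return ok, name
}

// Closer mirrors the Backend.Close capability introduced in this changeset.
type Closer interface{ Close() }

type fileBackend struct{}

func (fileBackend) Close() {}

func main() {
	ok, name := ImplementsInterface[Closer](fileBackend{})
	fmt.Println(ok, name) // true Closer
}
```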

@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -23,6 +23,7 @@ THE SOFTWARE.
package deezer
import (
"context"
"errors"
"strconv"
@ -52,14 +53,14 @@ func NewClient(token oauth2.TokenSource) Client {
}
}
func (c Client) UserHistory(offset int, limit int) (result HistoryResult, err error) {
func (c Client) UserHistory(ctx context.Context, offset int, limit int) (result HistoryResult, err error) {
const path = "/user/me/history"
return listRequest[HistoryResult](c, path, offset, limit)
return listRequest[HistoryResult](ctx, c, path, offset, limit)
}
func (c Client) UserTracks(offset int, limit int) (TracksResult, error) {
func (c Client) UserTracks(ctx context.Context, offset int, limit int) (TracksResult, error) {
const path = "/user/me/tracks"
return listRequest[TracksResult](c, path, offset, limit)
return listRequest[TracksResult](ctx, c, path, offset, limit)
}
func (c Client) setToken(req *resty.Request) error {
@ -72,8 +73,9 @@ func (c Client) setToken(req *resty.Request) error {
return nil
}
func listRequest[T Result](c Client, path string, offset int, limit int) (result T, err error) {
func listRequest[T Result](ctx context.Context, c Client, path string, offset int, limit int) (result T, err error) {
request := c.HTTPClient.R().
SetContext(ctx).
SetQueryParams(map[string]string{
"index": strconv.Itoa(offset),
"limit": strconv.Itoa(limit),


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -23,6 +23,7 @@ THE SOFTWARE.
package deezer_test
import (
"context"
"net/http"
"testing"
@ -48,7 +49,8 @@ func TestGetUserHistory(t *testing.T) {
"https://api.deezer.com/user/me/history",
"testdata/user-history.json")
result, err := client.UserHistory(0, 2)
ctx := context.Background()
result, err := client.UserHistory(ctx, 0, 2)
require.NoError(t, err)
assert := assert.New(t)
@ -69,7 +71,8 @@ func TestGetUserTracks(t *testing.T) {
"https://api.deezer.com/user/me/tracks",
"testdata/user-tracks.json")
result, err := client.UserTracks(0, 2)
ctx := context.Background()
result, err := client.UserTracks(ctx, 0, 2)
require.NoError(t, err)
assert := assert.New(t)


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Scotty is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
@ -16,6 +16,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package deezer
import (
"context"
"fmt"
"math"
"net/url"
@ -37,6 +38,8 @@ type DeezerApiBackend struct {
func (b *DeezerApiBackend) Name() string { return "deezer" }
func (b *DeezerApiBackend) Close() {}
func (b *DeezerApiBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "client-id",
@ -77,7 +80,7 @@ func (b *DeezerApiBackend) OAuth2Setup(token oauth2.TokenSource) error {
return nil
}
func (b *DeezerApiBackend) ExportListens(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
func (b *DeezerApiBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
// Choose a high offset; we attempt to search the listens backwards
// starting at the oldest one.
offset := math.MaxInt32
@ -88,23 +91,30 @@ func (b *DeezerApiBackend) ExportListens(oldestTimestamp time.Time, results chan
totalDuration := startTime.Sub(oldestTimestamp)
defer close(results)
p := models.Progress{Total: int64(totalDuration.Seconds())}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(totalDuration.Seconds()),
},
}
out:
for {
result, err := b.client.UserHistory(offset, perPage)
result, err := b.client.UserHistory(ctx, offset, perPage)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
// No result, break immediately
if result.Total == 0 {
break out
}
// The offset was higher than the actual number of tracks. Adjust the offset
// and continue.
if offset >= result.Total {
p.Total = int64(result.Total)
offset = max(result.Total-perPage, 0)
continue
}
@ -130,7 +140,8 @@ out:
}
remainingTime := startTime.Sub(minTime)
p.Elapsed = int64(totalDuration.Seconds() - remainingTime.Seconds())
p.Export.TotalItems += len(listens)
p.Export.Elapsed = int64(totalDuration.Seconds() - remainingTime.Seconds())
progress <- p
results <- models.ListensResult{Items: listens, OldestTimestamp: minTime}
@ -146,25 +157,29 @@ out:
}
results <- models.ListensResult{OldestTimestamp: minTime}
progress <- p.Complete()
p.Export.Complete()
progress <- p
}
func (b *DeezerApiBackend) ExportLoves(oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.Progress) {
func (b *DeezerApiBackend) ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
// Choose a high offset; we attempt to search the loves backwards
// starting at the oldest one.
offset := math.MaxInt32
perPage := MaxItemsPerGet
defer close(results)
p := models.Progress{Total: int64(perPage)}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(perPage),
},
}
var totalCount int
out:
for {
result, err := b.client.UserTracks(offset, perPage)
result, err := b.client.UserTracks(ctx, offset, perPage)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
@ -172,8 +187,8 @@ out:
// The offset was higher than the actual number of tracks. Adjust the offset
// and continue.
if offset >= result.Total {
p.Total = int64(result.Total)
totalCount = result.Total
p.Export.Total = int64(totalCount)
offset = max(result.Total-perPage, 0)
continue
}
@ -190,13 +205,14 @@ out:
loves = append(loves, love)
} else {
totalCount -= 1
break
}
}
sort.Sort(loves)
results <- models.LovesResult{Items: loves, Total: totalCount}
p.Elapsed += int64(count)
p.Export.TotalItems = totalCount
p.Export.Total = int64(totalCount)
p.Export.Elapsed += int64(count)
progress <- p
if offset <= 0 {
@ -210,7 +226,8 @@ out:
}
}
progress <- p.Complete()
p.Export.Complete()
progress <- p
}
func (t Listen) AsListen() models.Listen {
@ -236,7 +253,7 @@ func (t Track) AsTrack() models.Track {
TrackName: t.Title,
ReleaseName: t.Album.Title,
ArtistNames: []string{t.Artist.Name},
Duration: time.Duration(t.Duration * int(time.Second)),
Duration: time.Duration(t.Duration) * time.Second,
AdditionalInfo: map[string]any{},
}


@ -0,0 +1,208 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
Scotty is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later version.
Scotty is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
Scotty. If not, see <https://www.gnu.org/licenses/>.
*/
package deezerhistory
import (
"context"
"fmt"
"sort"
"strconv"
"strings"
"time"
"github.com/xuri/excelize/v2"
"go.uploadedlobster.com/mbtypes"
"go.uploadedlobster.com/scotty/internal/config"
"go.uploadedlobster.com/scotty/internal/i18n"
"go.uploadedlobster.com/scotty/internal/models"
)
const (
sheetListeningHistory = "10_listeningHistory"
sheetFavoriteSongs = "8_favoriteSong"
)
type DeezerHistoryBackend struct {
filePath string
}
func (b *DeezerHistoryBackend) Name() string { return "deezer-history" }
func (b *DeezerHistoryBackend) Close() {}
func (b *DeezerHistoryBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "file-path",
Label: i18n.Tr("File path"),
Type: models.String,
Default: "",
}}
}
func (b *DeezerHistoryBackend) InitConfig(config *config.ServiceConfig) error {
b.filePath = config.GetString("file-path")
return nil
}
func (b *DeezerHistoryBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
p := models.TransferProgress{
Export: &models.Progress{},
}
rows, err := ReadXLSXSheet(b.filePath, sheetListeningHistory)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
count := len(rows) - 1 // Exclude the header row
p.Export.TotalItems = count
p.Export.Total = int64(count)
listens := make(models.ListensList, 0, count)
for i, row := range models.IterExportProgress(rows, &p, progress) {
// Skip header row
if i == 0 {
continue
}
l, err := RowAsListen(row)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
listens = append(listens, *l)
}
sort.Sort(listens)
results <- models.ListensResult{Items: listens}
p.Export.Complete()
progress <- p
}
func (b *DeezerHistoryBackend) ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
p := models.TransferProgress{
Export: &models.Progress{},
}
rows, err := ReadXLSXSheet(b.filePath, sheetFavoriteSongs)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
count := len(rows) - 1 // Exclude the header row
p.Export.TotalItems = count
p.Export.Total = int64(count)
love := make(models.LovesList, 0, count)
for i, row := range models.IterExportProgress(rows, &p, progress) {
// Skip header row
if i == 0 {
continue
}
l, err := RowAsLove(row)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
love = append(love, *l)
}
sort.Sort(love)
results <- models.LovesResult{Items: love}
p.Export.Complete()
progress <- p
}
func ReadXLSXSheet(path string, sheet string) ([][]string, error) {
exc, err := excelize.OpenFile(path)
if err != nil {
return nil, err
}
// Get all rows of the requested sheet.
return exc.GetRows(sheet)
}
func RowAsListen(row []string) (*models.Listen, error) {
if len(row) < 9 {
err := fmt.Errorf("invalid row: expected 9 columns, got %d", len(row))
return nil, err
}
listenedAt, err := time.Parse(time.DateTime, row[8])
if err != nil {
return nil, err
}
listen := models.Listen{
ListenedAt: listenedAt,
Track: models.Track{
TrackName: row[0],
ArtistNames: []string{row[1]},
ReleaseName: row[3],
ISRC: mbtypes.ISRC(row[2]),
AdditionalInfo: map[string]any{
"music_service": "deezer.com",
},
},
}
if duration, err := strconv.Atoi(row[5]); err == nil {
listen.PlaybackDuration = time.Duration(duration) * time.Second
}
return &listen, nil
}
func RowAsLove(row []string) (*models.Love, error) {
if len(row) < 5 {
err := fmt.Errorf("invalid row: expected 5 columns, got %d", len(row))
return nil, err
}
url := row[4]
if !strings.HasPrefix(url, "http://") && !strings.HasPrefix(url, "https://") {
url = "https://" + url
}
love := models.Love{
Track: models.Track{
TrackName: row[0],
ArtistNames: []string{row[1]},
ReleaseName: row[2],
ISRC: mbtypes.ISRC(row[3]),
AdditionalInfo: map[string]any{
"music_service": "deezer.com",
"origin_url": url,
"deezer_id": url,
},
},
}
return &love, nil
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -17,46 +17,119 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package dump
import (
"bytes"
"context"
"fmt"
"io"
"os"
"strings"
"go.uploadedlobster.com/scotty/internal/config"
"go.uploadedlobster.com/scotty/internal/i18n"
"go.uploadedlobster.com/scotty/internal/models"
)
type DumpBackend struct{}
type DumpBackend struct {
buffer io.ReadWriter
print bool // Whether to print the output to stdout
}
func (b *DumpBackend) Name() string { return "dump" }
func (b *DumpBackend) Options() []models.BackendOption { return nil }
func (b *DumpBackend) Close() {}
func (b *DumpBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "file-path",
Label: i18n.Tr("File path"),
Type: models.String,
}, {
Name: "append",
Label: i18n.Tr("Append to file"),
Type: models.Bool,
Default: "true",
}}
}
func (b *DumpBackend) InitConfig(config *config.ServiceConfig) error {
filePath := config.GetString("file-path")
append := config.GetBool("append", true)
if strings.TrimSpace(filePath) != "" {
mode := os.O_WRONLY | os.O_CREATE
if append {
mode |= os.O_APPEND // Append to the existing file content
} else {
mode |= os.O_TRUNC // Truncate the file if not appending
}
f, err := os.OpenFile(filePath, mode, 0644)
if err != nil {
return err
}
b.buffer = f
b.print = false // If a file path is specified, we don't print to stdout
} else {
// If no file path is specified, use a bytes.Buffer for in-memory dumping
b.buffer = new(bytes.Buffer)
b.print = true // Print to stdout
}
return nil
}
func (b *DumpBackend) StartImport() error { return nil }
func (b *DumpBackend) FinishImport() error { return nil }
func (b *DumpBackend) ImportListens(export models.ListensResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (b *DumpBackend) FinishImport(result *models.ImportResult) error {
if b.print {
out := new(strings.Builder)
_, err := io.Copy(out, b.buffer)
if err != nil {
return err
}
if result != nil {
result.Log(models.Output, out.String())
}
}
// Close the writer if it implements io.Closer
if closer, ok := b.buffer.(io.Closer); ok {
if err := closer.Close(); err != nil {
return fmt.Errorf("failed to close output file: %w", err)
}
}
return nil
}
func (b *DumpBackend) ImportListens(ctx context.Context, export models.ListensResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
for _, listen := range export.Items {
if err := ctx.Err(); err != nil {
return importResult, err
}
importResult.UpdateTimestamp(listen.ListenedAt)
importResult.ImportCount += 1
msg := fmt.Sprintf("🎶 %v: \"%v\" by %v (%v)",
_, err := fmt.Fprintf(b.buffer, "🎶 %v: \"%v\" by %v (%v)\n",
listen.ListenedAt, listen.TrackName, listen.ArtistName(), listen.RecordingMBID)
importResult.Log(models.Info, msg)
progress <- models.Progress{}.FromImportResult(importResult)
if err != nil {
return importResult, err
}
progress <- models.TransferProgress{}.FromImportResult(importResult, false)
}
return importResult, nil
}
func (b *DumpBackend) ImportLoves(export models.LovesResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (b *DumpBackend) ImportLoves(ctx context.Context, export models.LovesResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
for _, love := range export.Items {
if err := ctx.Err(); err != nil {
return importResult, err
}
importResult.UpdateTimestamp(love.Created)
importResult.ImportCount += 1
msg := fmt.Sprintf("❤️ %v: \"%v\" by %v (%v)",
_, err := fmt.Fprintf(b.buffer, "❤️ %v: \"%v\" by %v (%v)\n",
love.Created, love.TrackName, love.ArtistName(), love.RecordingMBID)
importResult.Log(models.Info, msg)
progress <- models.Progress{}.FromImportResult(importResult)
if err != nil {
return importResult, err
}
progress <- models.TransferProgress{}.FromImportResult(importResult, false)
}
return importResult, nil


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Scotty is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
@ -16,6 +16,8 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package backends
import (
"context"
"sync"
"time"
"go.uploadedlobster.com/scotty/internal/models"
@ -23,7 +25,7 @@ import (
type ExportProcessor[T models.ListensResult | models.LovesResult] interface {
ExportBackend() models.Backend
Process(oldestTimestamp time.Time, results chan T, progress chan models.Progress)
Process(ctx context.Context, wg *sync.WaitGroup, oldestTimestamp time.Time, results chan T, progress chan models.TransferProgress)
}
type ListensExportProcessor struct {
@ -34,9 +36,11 @@ func (p ListensExportProcessor) ExportBackend() models.Backend {
return p.Backend
}
func (p ListensExportProcessor) Process(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
p.Backend.ExportListens(oldestTimestamp, results, progress)
close(progress)
func (p ListensExportProcessor) Process(ctx context.Context, wg *sync.WaitGroup, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
wg.Add(1)
defer wg.Done()
defer close(results)
p.Backend.ExportListens(ctx, oldestTimestamp, results, progress)
}
type LovesExportProcessor struct {
@ -47,7 +51,9 @@ func (p LovesExportProcessor) ExportBackend() models.Backend {
return p.Backend
}
func (p LovesExportProcessor) Process(oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.Progress) {
p.Backend.ExportLoves(oldestTimestamp, results, progress)
close(progress)
func (p LovesExportProcessor) Process(ctx context.Context, wg *sync.WaitGroup, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
wg.Add(1)
defer wg.Done()
defer close(results)
p.Backend.ExportLoves(ctx, oldestTimestamp, results, progress)
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -22,6 +22,7 @@ THE SOFTWARE.
package funkwhale
import (
"context"
"errors"
"strconv"
@ -54,15 +55,10 @@ func NewClient(serverURL string, token string) Client {
}
}
func (c Client) GetHistoryListenings(user string, page int, perPage int) (result ListeningsResult, err error) {
func (c Client) GetHistoryListenings(ctx context.Context, user string, page int, perPage int) (result ListeningsResult, err error) {
const path = "/api/v1/history/listenings"
response, err := c.HTTPClient.R().
SetQueryParams(map[string]string{
"username": user,
"page": strconv.Itoa(page),
"page_size": strconv.Itoa(perPage),
"ordering": "-creation_date",
}).
response, err := c.buildListRequest(ctx, page, perPage).
SetQueryParam("username", user).
SetResult(&result).
Get(path)
@ -73,14 +69,9 @@ func (c Client) GetHistoryListenings(user string, page int, perPage int) (result
return
}
func (c Client) GetFavoriteTracks(page int, perPage int) (result FavoriteTracksResult, err error) {
func (c Client) GetFavoriteTracks(ctx context.Context, page int, perPage int) (result FavoriteTracksResult, err error) {
const path = "/api/v1/favorites/tracks"
response, err := c.HTTPClient.R().
SetQueryParams(map[string]string{
"page": strconv.Itoa(page),
"page_size": strconv.Itoa(perPage),
"ordering": "-creation_date",
}).
response, err := c.buildListRequest(ctx, page, perPage).
SetResult(&result).
Get(path)
@ -90,3 +81,13 @@ func (c Client) GetFavoriteTracks(page int, perPage int) (result FavoriteTracksR
}
return
}
func (c Client) buildListRequest(ctx context.Context, page int, perPage int) *resty.Request {
return c.HTTPClient.R().
SetContext(ctx).
SetQueryParams(map[string]string{
"page": strconv.Itoa(page),
"page_size": strconv.Itoa(perPage),
"ordering": "-creation_date",
})
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -22,6 +22,7 @@ THE SOFTWARE.
package funkwhale_test
import (
"context"
"net/http"
"testing"
@ -49,7 +50,8 @@ func TestGetHistoryListenings(t *testing.T) {
"https://funkwhale.example.com/api/v1/history/listenings",
"testdata/listenings.json")
result, err := client.GetHistoryListenings("outsidecontext", 0, 2)
ctx := context.Background()
result, err := client.GetHistoryListenings(ctx, "outsidecontext", 0, 2)
require.NoError(t, err)
assert := assert.New(t)
@ -73,7 +75,8 @@ func TestGetFavoriteTracks(t *testing.T) {
"https://funkwhale.example.com/api/v1/favorites/tracks",
"testdata/favorite-tracks.json")
result, err := client.GetFavoriteTracks(0, 2)
ctx := context.Background()
result, err := client.GetFavoriteTracks(ctx, 0, 2)
require.NoError(t, err)
assert := assert.New(t)


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -17,6 +17,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package funkwhale
import (
"context"
"sort"
"time"
@ -35,6 +36,8 @@ type FunkwhaleApiBackend struct {
func (b *FunkwhaleApiBackend) Name() string { return "funkwhale" }
func (b *FunkwhaleApiBackend) Close() {}
func (b *FunkwhaleApiBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "server-url",
@ -60,21 +63,26 @@ func (b *FunkwhaleApiBackend) InitConfig(config *config.ServiceConfig) error {
return nil
}
func (b *FunkwhaleApiBackend) ExportListens(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
func (b *FunkwhaleApiBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
page := 1
perPage := MaxItemsPerGet
defer close(results)
// We need to gather the full list of listens in order to sort them
listens := make(models.ListensList, 0, 2*perPage)
p := models.Progress{Total: int64(perPage)}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(perPage),
},
}
out:
for {
result, err := b.client.GetHistoryListenings(b.username, page, perPage)
result, err := b.client.GetHistoryListenings(ctx, b.username, page, perPage)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
count := len(result.Results)
@ -85,7 +93,7 @@ out:
for _, fwListen := range result.Results {
listen := fwListen.AsListen()
if listen.ListenedAt.After(oldestTimestamp) {
p.Elapsed += 1
p.Export.Elapsed += 1
listens = append(listens, listen)
} else {
break out
@ -94,36 +102,42 @@ out:
if result.Next == "" {
// No further results
p.Total = p.Elapsed
p.Total -= int64(perPage - count)
p.Export.Total = p.Export.Elapsed
p.Export.Total -= int64(perPage - count)
break out
}
p.Total += int64(perPage)
p.Export.TotalItems = len(listens)
p.Export.Total += int64(perPage)
progress <- p
page += 1
}
sort.Sort(listens)
progress <- p.Complete()
p.Export.TotalItems = len(listens)
p.Export.Complete()
progress <- p
results <- models.ListensResult{Items: listens}
}
func (b *FunkwhaleApiBackend) ExportLoves(oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.Progress) {
func (b *FunkwhaleApiBackend) ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
page := 1
perPage := MaxItemsPerGet
defer close(results)
// We need to gather the full list of loves in order to sort them
loves := make(models.LovesList, 0, 2*perPage)
p := models.Progress{Total: int64(perPage)}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(perPage),
},
}
out:
for {
result, err := b.client.GetFavoriteTracks(page, perPage)
result, err := b.client.GetFavoriteTracks(ctx, page, perPage)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
@ -136,7 +150,7 @@ out:
for _, favorite := range result.Results {
love := favorite.AsLove()
if love.Created.After(oldestTimestamp) {
p.Elapsed += 1
p.Export.Elapsed += 1
loves = append(loves, love)
} else {
break out
@ -148,13 +162,16 @@ out:
break out
}
p.Total += int64(perPage)
p.Export.TotalItems = len(loves)
p.Export.Total += int64(perPage)
progress <- p
page += 1
}
sort.Sort(loves)
progress <- p.Complete()
p.Export.TotalItems = len(loves)
p.Export.Complete()
progress <- p
results <- models.LovesResult{Items: loves}
}
@ -205,7 +222,7 @@ func (t Track) AsTrack() models.Track {
}
if len(t.Uploads) > 0 {
track.Duration = time.Duration(t.Uploads[0].Duration * int(time.Second))
track.Duration = time.Duration(t.Uploads[0].Duration) * time.Second
}
return track


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -18,13 +18,16 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package backends
import (
"context"
"sync"
"go.uploadedlobster.com/scotty/internal/models"
)
type ImportProcessor[T models.ListensResult | models.LovesResult] interface {
ImportBackend() models.ImportBackend
Process(results chan T, out chan models.ImportResult, progress chan models.Progress)
Import(export T, result models.ImportResult, out chan models.ImportResult, progress chan models.Progress) (models.ImportResult, error)
Process(ctx context.Context, wg *sync.WaitGroup, results chan T, out chan models.ImportResult, progress chan models.TransferProgress)
Import(ctx context.Context, export T, result models.ImportResult, out chan models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error)
}
type ListensImportProcessor struct {
@ -35,13 +38,13 @@ func (p ListensImportProcessor) ImportBackend() models.ImportBackend {
return p.Backend
}
func (p ListensImportProcessor) Process(results chan models.ListensResult, out chan models.ImportResult, progress chan models.Progress) {
process(p, results, out, progress)
func (p ListensImportProcessor) Process(ctx context.Context, wg *sync.WaitGroup, results chan models.ListensResult, out chan models.ImportResult, progress chan models.TransferProgress) {
process(ctx, wg, p, results, out, progress)
}
func (p ListensImportProcessor) Import(export models.ListensResult, result models.ImportResult, out chan models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (p ListensImportProcessor) Import(ctx context.Context, export models.ListensResult, result models.ImportResult, out chan models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
if export.Error != nil {
return handleError(result, export.Error, progress), export.Error
return result, export.Error
}
if export.Total > 0 {
@ -49,9 +52,9 @@ func (p ListensImportProcessor) Import(export models.ListensResult, result model
} else {
result.TotalCount += len(export.Items)
}
importResult, err := p.Backend.ImportListens(export, result, progress)
importResult, err := p.Backend.ImportListens(ctx, export, result, progress)
if err != nil {
return handleError(result, err, progress), err
return importResult, err
}
return importResult, nil
}
@ -64,13 +67,13 @@ func (p LovesImportProcessor) ImportBackend() models.ImportBackend {
return p.Backend
}
func (p LovesImportProcessor) Process(results chan models.LovesResult, out chan models.ImportResult, progress chan models.Progress) {
process(p, results, out, progress)
func (p LovesImportProcessor) Process(ctx context.Context, wg *sync.WaitGroup, results chan models.LovesResult, out chan models.ImportResult, progress chan models.TransferProgress) {
process(ctx, wg, p, results, out, progress)
}
func (p LovesImportProcessor) Import(export models.LovesResult, result models.ImportResult, out chan models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (p LovesImportProcessor) Import(ctx context.Context, export models.LovesResult, result models.ImportResult, out chan models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
if export.Error != nil {
return handleError(result, export.Error, progress), export.Error
return result, export.Error
}
if export.Total > 0 {
@ -78,46 +81,61 @@ func (p LovesImportProcessor) Import(export models.LovesResult, result models.Im
} else {
result.TotalCount += len(export.Items)
}
importResult, err := p.Backend.ImportLoves(export, result, progress)
importResult, err := p.Backend.ImportLoves(ctx, export, result, progress)
if err != nil {
return handleError(importResult, err, progress), err
return importResult, err
}
return importResult, nil
}
func process[R models.LovesResult | models.ListensResult, P ImportProcessor[R]](processor P, results chan R, out chan models.ImportResult, progress chan models.Progress) {
func process[R models.LovesResult | models.ListensResult, P ImportProcessor[R]](
ctx context.Context, wg *sync.WaitGroup,
processor P, results chan R,
out chan models.ImportResult,
progress chan models.TransferProgress,
) {
wg.Add(1)
defer wg.Done()
defer close(out)
defer close(progress)
result := models.ImportResult{}
p := models.TransferProgress{}
err := processor.ImportBackend().StartImport()
if err != nil {
if err := processor.ImportBackend().StartImport(); err != nil {
out <- handleError(result, err, progress)
return
}
for exportResult := range results {
importResult, err := processor.Import(exportResult, result, out, progress)
if err != nil {
out <- handleError(result, err, progress)
return
}
result.Update(importResult)
progress <- models.Progress{}.FromImportResult(result)
}
err = processor.ImportBackend().FinishImport()
if err != nil {
if err := ctx.Err(); err != nil {
processor.ImportBackend().FinishImport(&result)
out <- handleError(result, err, progress)
return
}
progress <- models.Progress{}.FromImportResult(result).Complete()
importResult, err := processor.Import(
ctx, exportResult, result.Copy(), out, progress)
result.Update(&importResult)
if err != nil {
processor.ImportBackend().FinishImport(&result)
out <- handleError(result, err, progress)
return
}
progress <- p.FromImportResult(result, false)
}
if err := processor.ImportBackend().FinishImport(&result); err != nil {
out <- handleError(result, err, progress)
return
}
progress <- p.FromImportResult(result, true)
out <- result
}
func handleError(result models.ImportResult, err error, progress chan models.Progress) models.ImportResult {
func handleError(result models.ImportResult, err error, progress chan models.TransferProgress) models.ImportResult {
result.Error = err
progress <- models.Progress{}.FromImportResult(result).Complete()
p := models.TransferProgress{}.FromImportResult(result, false)
p.Import.Abort()
progress <- p
return result
}


@ -1,5 +1,5 @@
/*
Copyright © 2023-2024 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -18,15 +18,26 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package jspf
import (
"context"
"errors"
"os"
"sort"
"strings"
"time"
"go.uploadedlobster.com/mbtypes"
"go.uploadedlobster.com/scotty/internal/config"
"go.uploadedlobster.com/scotty/internal/i18n"
"go.uploadedlobster.com/scotty/internal/models"
"go.uploadedlobster.com/scotty/pkg/jspf"
)
const (
artistMBIDPrefix = "https://musicbrainz.org/artist/"
recordingMBIDPrefix = "https://musicbrainz.org/recording/"
releaseMBIDPrefix = "https://musicbrainz.org/release/"
)
type JSPFBackend struct {
filePath string
playlist jspf.Playlist
@ -35,6 +46,8 @@ type JSPFBackend struct {
func (b *JSPFBackend) Name() string { return "jspf" }
func (b *JSPFBackend) Close() {}
func (b *JSPFBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "file-path",
@ -67,14 +80,11 @@ func (b *JSPFBackend) InitConfig(config *config.ServiceConfig) error {
Title: config.GetString("title"),
Creator: config.GetString("username"),
Identifier: config.GetString("identifier"),
Date: time.Now(),
Tracks: make([]jspf.Track, 0),
Extension: map[string]any{
jspf.MusicBrainzPlaylistExtensionID: jspf.MusicBrainzPlaylistExtension{
LastModifiedAt: time.Now(),
Public: true,
},
},
}
b.addMusicBrainzPlaylistExtension()
return nil
}
@ -82,52 +92,128 @@ func (b *JSPFBackend) StartImport() error {
return b.readJSPF()
}
func (b *JSPFBackend) FinishImport() error {
func (b *JSPFBackend) FinishImport(result *models.ImportResult) error {
return b.writeJSPF()
}
func (b *JSPFBackend) ImportListens(export models.ListensResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
for _, listen := range export.Items {
func (b *JSPFBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
err := b.readJSPF()
p := models.TransferProgress{
Export: &models.Progress{},
}
if err != nil {
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
listens := make(models.ListensList, 0, len(b.playlist.Tracks))
p.Export.Total = int64(len(b.playlist.Tracks))
for _, track := range models.IterExportProgress(b.playlist.Tracks, &p, progress) {
listen, err := trackAsListen(track)
if err == nil && listen != nil && listen.ListenedAt.After(oldestTimestamp) {
listens = append(listens, *listen)
p.Export.TotalItems += 1
}
}
sort.Sort(listens)
results <- models.ListensResult{Items: listens}
}
func (b *JSPFBackend) ImportListens(ctx context.Context, export models.ListensResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
p := models.TransferProgress{}.FromImportResult(importResult, false)
for _, listen := range models.IterImportProgress(export.Items, &p, progress) {
if err := ctx.Err(); err != nil {
return importResult, err
}
track := listenAsTrack(listen)
b.playlist.Tracks = append(b.playlist.Tracks, track)
importResult.ImportCount += 1
importResult.UpdateTimestamp(listen.ListenedAt)
}
progress <- models.Progress{}.FromImportResult(importResult)
return importResult, nil
}
func (b *JSPFBackend) ImportLoves(export models.LovesResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
for _, love := range export.Items {
func (b *JSPFBackend) ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
err := b.readJSPF()
p := models.TransferProgress{
Export: &models.Progress{},
}
if err != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
loves := make(models.LovesList, 0, len(b.playlist.Tracks))
p.Export.Total = int64(len(b.playlist.Tracks))
for _, track := range models.IterExportProgress(b.playlist.Tracks, &p, progress) {
love, err := trackAsLove(track)
if err == nil && love != nil && love.Created.After(oldestTimestamp) {
loves = append(loves, *love)
p.Export.TotalItems += 1
}
}
sort.Sort(loves)
results <- models.LovesResult{Items: loves}
}
func (b *JSPFBackend) ImportLoves(ctx context.Context, export models.LovesResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
p := models.TransferProgress{}.FromImportResult(importResult, false)
for _, love := range models.IterImportProgress(export.Items, &p, progress) {
if err := ctx.Err(); err != nil {
return importResult, err
}
track := loveAsTrack(love)
b.playlist.Tracks = append(b.playlist.Tracks, track)
importResult.ImportCount += 1
importResult.UpdateTimestamp(love.Created)
}
progress <- models.Progress{}.FromImportResult(importResult)
return importResult, nil
}
func listenAsTrack(l models.Listen) jspf.Track {
l.FillAdditionalInfo()
track := trackAsTrack(l.Track)
track := trackAsJSPFTrack(l.Track)
extension := makeMusicBrainzExtension(l.Track)
extension.AddedAt = l.ListenedAt
extension.AddedBy = l.UserName
track.Extension[jspf.MusicBrainzTrackExtensionID] = extension
if l.RecordingMBID != "" {
track.Identifier = append(track.Identifier, "https://musicbrainz.org/recording/"+string(l.RecordingMBID))
track.Identifier = append(track.Identifier, recordingMBIDPrefix+string(l.RecordingMBID))
}
return track
}
func trackAsListen(t jspf.Track) (*models.Listen, error) {
track, ext, err := jspfTrackAsTrack(t)
if err != nil {
return nil, err
}
listen := models.Listen{
ListenedAt: ext.AddedAt,
UserName: ext.AddedBy,
Track: *track,
}
return &listen, err
}
func loveAsTrack(l models.Love) jspf.Track {
l.FillAdditionalInfo()
track := trackAsTrack(l.Track)
track := trackAsJSPFTrack(l.Track)
extension := makeMusicBrainzExtension(l.Track)
extension.AddedAt = l.Created
extension.AddedBy = l.UserName
@ -138,24 +224,69 @@ func loveAsTrack(l models.Love) jspf.Track {
recordingMBID = l.RecordingMBID
}
if recordingMBID != "" {
track.Identifier = append(track.Identifier, "https://musicbrainz.org/recording/"+string(recordingMBID))
track.Identifier = append(track.Identifier, recordingMBIDPrefix+string(recordingMBID))
}
return track
}
func trackAsTrack(t models.Track) jspf.Track {
func trackAsLove(t jspf.Track) (*models.Love, error) {
track, ext, err := jspfTrackAsTrack(t)
if err != nil {
return nil, err
}
love := models.Love{
Created: ext.AddedAt,
UserName: ext.AddedBy,
RecordingMBID: track.RecordingMBID,
Track: *track,
}
recordingMSID, ok := track.AdditionalInfo["recording_msid"].(string)
if ok {
love.RecordingMSID = mbtypes.MBID(recordingMSID)
}
return &love, err
}
func trackAsJSPFTrack(t models.Track) jspf.Track {
track := jspf.Track{
Title: t.TrackName,
Album: t.ReleaseName,
Creator: t.ArtistName(),
TrackNum: t.TrackNumber,
Extension: map[string]any{},
Duration: t.Duration.Milliseconds(),
Extension: jspf.ExtensionMap{},
}
return track
}
func jspfTrackAsTrack(t jspf.Track) (*models.Track, *jspf.MusicBrainzTrackExtension, error) {
track := models.Track{
ArtistNames: []string{t.Creator},
ReleaseName: t.Album,
TrackName: t.Title,
TrackNumber: t.TrackNum,
Duration: time.Duration(t.Duration) * time.Millisecond,
}
for _, id := range t.Identifier {
if strings.HasPrefix(id, recordingMBIDPrefix) {
track.RecordingMBID = mbtypes.MBID(id[len(recordingMBIDPrefix):])
}
}
ext, err := readMusicBrainzExtension(t, &track)
if err != nil {
return nil, nil, err
}
return &track, ext, nil
}
func makeMusicBrainzExtension(t models.Track) jspf.MusicBrainzTrackExtension {
extension := jspf.MusicBrainzTrackExtension{
AdditionalMetadata: t.AdditionalInfo,
@ -163,11 +294,11 @@ func makeMusicBrainzExtension(t models.Track) jspf.MusicBrainzTrackExtension {
}
for i, mbid := range t.ArtistMBIDs {
extension.ArtistIdentifiers[i] = "https://musicbrainz.org/artist/" + string(mbid)
extension.ArtistIdentifiers[i] = artistMBIDPrefix + string(mbid)
}
if t.ReleaseMBID != "" {
extension.ReleaseIdentifier = "https://musicbrainz.org/release/" + string(t.ReleaseMBID)
extension.ReleaseIdentifier = releaseMBIDPrefix + string(t.ReleaseMBID)
}
// The tracknumber tag would be redundant
@ -176,6 +307,25 @@ func makeMusicBrainzExtension(t models.Track) jspf.MusicBrainzTrackExtension {
return extension
}
func readMusicBrainzExtension(jspfTrack jspf.Track, outputTrack *models.Track) (*jspf.MusicBrainzTrackExtension, error) {
ext := jspf.MusicBrainzTrackExtension{}
err := jspfTrack.Extension.Get(jspf.MusicBrainzTrackExtensionID, &ext)
if err != nil {
return nil, errors.New("missing MusicBrainz track extension")
}
outputTrack.AdditionalInfo = ext.AdditionalMetadata
outputTrack.ReleaseMBID = mbtypes.MBID(ext.ReleaseIdentifier)
outputTrack.ArtistMBIDs = make([]mbtypes.MBID, len(ext.ArtistIdentifiers))
for i, mbid := range ext.ArtistIdentifiers {
if strings.HasPrefix(mbid, artistMBIDPrefix) {
outputTrack.ArtistMBIDs[i] = mbtypes.MBID(mbid[len(artistMBIDPrefix):])
}
}
return &ext, nil
}
func (b *JSPFBackend) readJSPF() error {
if b.append {
file, err := os.Open(b.filePath)
@ -199,6 +349,7 @@ func (b *JSPFBackend) readJSPF() error {
return err
}
b.playlist = playlist.Playlist
b.addMusicBrainzPlaylistExtension()
}
}
@ -218,3 +369,13 @@ func (b *JSPFBackend) writeJSPF() error {
defer file.Close()
return playlist.Write(file)
}
func (b *JSPFBackend) addMusicBrainzPlaylistExtension() {
if b.playlist.Extension == nil {
b.playlist.Extension = make(jspf.ExtensionMap, 1)
}
extension := jspf.MusicBrainzPlaylistExtension{Public: true}
b.playlist.Extension.Get(jspf.MusicBrainzPlaylistExtensionID, &extension)
extension.LastModifiedAt = time.Now()
b.playlist.Extension[jspf.MusicBrainzPlaylistExtensionID] = extension
}
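The `addMusicBrainzPlaylistExtension` helper above is a read-modify-write upsert: load the existing extension if present, otherwise start from a default, bump the timestamp, and store the value back. The same shape over a plain map, with simplified stand-in types for the jspf extension API:

```go
package main

import (
	"fmt"
	"time"
)

type playlistExt struct {
	Public         bool
	LastModifiedAt time.Time
}

// upsertExt keeps an existing entry's fields (e.g. Public) but always
// refreshes its LastModifiedAt; absent entries get the default.
func upsertExt(exts map[string]playlistExt, key string) {
	ext, ok := exts[key]
	if !ok {
		ext = playlistExt{Public: true} // default for new entries
	}
	ext.LastModifiedAt = time.Now()
	exts[key] = ext
}

func main() {
	exts := map[string]playlistExt{}
	upsertExt(exts, "playlist-ext")
	fmt.Println(exts["playlist-ext"].Public) // true
}
```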


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Scotty is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
@ -16,6 +16,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package lastfm
import (
"context"
"fmt"
"net/url"
"sort"
@ -45,6 +46,8 @@ type LastfmApiBackend struct {
func (b *LastfmApiBackend) Name() string { return "lastfm" }
func (b *LastfmApiBackend) Close() {}
func (b *LastfmApiBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "username",
@ -70,7 +73,9 @@ func (b *LastfmApiBackend) InitConfig(config *config.ServiceConfig) error {
}
func (b *LastfmApiBackend) StartImport() error { return nil }
func (b *LastfmApiBackend) FinishImport() error { return nil }
func (b *LastfmApiBackend) FinishImport(result *models.ImportResult) error {
return nil
}
func (b *LastfmApiBackend) OAuth2Strategy(redirectURL *url.URL) auth.OAuth2Strategy {
return lastfmStrategy{
@ -88,18 +93,27 @@ func (b *LastfmApiBackend) OAuth2Setup(token oauth2.TokenSource) error {
return nil
}
func (b *LastfmApiBackend) ExportListens(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
func (b *LastfmApiBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
page := MaxPage
minTime := oldestTimestamp
perPage := MaxItemsPerGet
defer close(results)
// We need to gather the full list of listens in order to sort them
p := models.Progress{Total: int64(page)}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(page),
},
}
out:
for page > 0 {
if err := ctx.Err(); err != nil {
results <- models.ListensResult{Error: err}
p.Export.Abort()
progress <- p
return
}
args := lastfm.P{
"user": b.username,
"limit": MaxListensPerGet,
@ -110,7 +124,8 @@ out:
result, err := b.client.User.GetRecentTracks(args)
if err != nil {
results <- models.ListensResult{Error: err}
progress <- p.Complete()
p.Export.Abort()
progress <- p
return
}
@ -129,11 +144,12 @@ out:
timestamp, err := strconv.ParseInt(scrobble.Date.Uts, 10, 64)
if err != nil {
results <- models.ListensResult{Error: err}
progress <- p.Complete()
p.Export.Abort()
progress <- p
break out
}
if timestamp > oldestTimestamp.Unix() {
p.Elapsed += 1
p.Export.Elapsed += 1
listen := models.Listen{
ListenedAt: time.Unix(timestamp, 0),
UserName: b.username,
@ -167,18 +183,24 @@ out:
Total: result.Total,
OldestTimestamp: minTime,
}
p.Total = int64(result.TotalPages)
p.Elapsed = int64(result.TotalPages - page)
p.Export.Total = int64(result.TotalPages)
p.Export.Elapsed = int64(result.TotalPages - page)
p.Export.TotalItems += len(listens)
progress <- p
}
results <- models.ListensResult{OldestTimestamp: minTime}
progress <- p.Complete()
p.Export.Complete()
progress <- p
}
func (b *LastfmApiBackend) ImportListens(export models.ListensResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (b *LastfmApiBackend) ImportListens(ctx context.Context, export models.ListensResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
total := len(export.Items)
for i := 0; i < total; i += MaxListensPerSubmission {
if err := ctx.Err(); err != nil {
return importResult, err
}
listens := export.Items[i:min(i+MaxListensPerSubmission, total)]
count := len(listens)
if count == 0 {
@ -246,38 +268,47 @@ func (b *LastfmApiBackend) ImportListens(export models.ListensResult, importResu
importResult.UpdateTimestamp(listens[count-1].ListenedAt)
importResult.ImportCount += accepted
progress <- models.Progress{}.FromImportResult(importResult)
progress <- models.TransferProgress{}.FromImportResult(importResult, false)
}
return importResult, nil
}
func (b *LastfmApiBackend) ExportLoves(oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.Progress) {
func (b *LastfmApiBackend) ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
// Choose a high offset; we attempt to search the loves backwards,
// starting at the oldest one.
page := 1
perPage := MaxItemsPerGet
defer close(results)
loves := make(models.LovesList, 0, 2*MaxItemsPerGet)
p := models.Progress{Total: int64(perPage)}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(perPage),
},
}
var totalCount int
out:
for {
if err := ctx.Err(); err != nil {
results <- models.LovesResult{Error: err}
p.Export.Abort()
progress <- p
return
}
result, err := b.client.User.GetLovedTracks(lastfm.P{
"user": b.username,
"limit": MaxItemsPerGet,
"page": page,
})
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
p.Total = int64(result.Total)
count := len(result.Tracks)
if count == 0 {
break out
@ -286,7 +317,8 @@ out:
for _, track := range result.Tracks {
timestamp, err := strconv.ParseInt(track.Date.Uts, 10, 64)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
@ -312,19 +344,26 @@ out:
}
}
p.Elapsed += int64(count)
p.Export.Total += int64(perPage)
p.Export.TotalItems = totalCount
p.Export.Elapsed += int64(count)
progress <- p
page += 1
}
sort.Sort(loves)
p.Export.Complete()
progress <- p
results <- models.LovesResult{Items: loves, Total: totalCount}
progress <- p.Complete()
}
func (b *LastfmApiBackend) ImportLoves(export models.LovesResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (b *LastfmApiBackend) ImportLoves(ctx context.Context, export models.LovesResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
for _, love := range export.Items {
if err := ctx.Err(); err != nil {
return importResult, err
}
err := b.client.Track.Love(lastfm.P{
"track": love.TrackName,
"artist": love.ArtistName(),
@ -339,7 +378,7 @@ func (b *LastfmApiBackend) ImportLoves(export models.LovesResult, importResult m
importResult.Log(models.Error, msg)
}
progress <- models.Progress{}.FromImportResult(importResult)
progress <- models.TransferProgress{}.FromImportResult(importResult, false)
}
return importResult, nil
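`ImportListens` above submits listens in capped chunks via `export.Items[i:min(i+MaxListensPerSubmission, total)]`. That slice-windowing idiom (using the `min` builtin available since Go 1.21) can be sketched on its own:

```go
package main

import "fmt"

// chunks splits items into windows of at most n elements,
// with the final window holding any remainder.
func chunks(items []string, n int) [][]string {
	out := make([][]string, 0, (len(items)+n-1)/n)
	for i := 0; i < len(items); i += n {
		out = append(out, items[i:min(i+n, len(items))])
	}
	return out
}

func main() {
	got := chunks([]string{"a", "b", "c", "d", "e"}, 2)
	fmt.Println(len(got), got) // 3 [[a b] [c d] [e]]
}
```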


@ -0,0 +1,224 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
package lbarchive
import (
"context"
"time"
"go.uploadedlobster.com/musicbrainzws2"
lbapi "go.uploadedlobster.com/scotty/internal/backends/listenbrainz"
"go.uploadedlobster.com/scotty/internal/config"
"go.uploadedlobster.com/scotty/internal/i18n"
"go.uploadedlobster.com/scotty/internal/listenbrainz"
"go.uploadedlobster.com/scotty/internal/models"
"go.uploadedlobster.com/scotty/internal/version"
)
const (
listensBatchSize = 2000
lovesBatchSize = listenbrainz.MaxItemsPerGet
)
type ListenBrainzArchiveBackend struct {
filePath string
lbClient listenbrainz.Client
mbClient *musicbrainzws2.Client
}
func (b *ListenBrainzArchiveBackend) Name() string { return "listenbrainz-archive" }
func (b *ListenBrainzArchiveBackend) Close() {
if b.mbClient != nil {
b.mbClient.Close()
}
}
func (b *ListenBrainzArchiveBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "archive-path",
Label: i18n.Tr("Archive path"),
Type: models.String,
}}
}
func (b *ListenBrainzArchiveBackend) InitConfig(config *config.ServiceConfig) error {
b.filePath = config.GetString("archive-path")
b.lbClient = listenbrainz.NewClient("", version.UserAgent())
b.mbClient = musicbrainzws2.NewClient(musicbrainzws2.AppInfo{
Name: version.AppName,
Version: version.AppVersion,
URL: version.AppURL,
})
return nil
}
func (b *ListenBrainzArchiveBackend) ExportListens(
ctx context.Context, oldestTimestamp time.Time,
results chan models.ListensResult, progress chan models.TransferProgress) {
startTime := time.Now()
minTime := oldestTimestamp
if minTime.Unix() < 1 {
minTime = time.Unix(1, 0)
}
totalDuration := startTime.Sub(oldestTimestamp)
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(totalDuration.Seconds()),
},
}
archive, err := listenbrainz.OpenExportArchive(b.filePath)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
defer archive.Close()
userInfo, err := archive.UserInfo()
if err != nil {
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
listens := make(models.ListensList, 0, listensBatchSize)
for rawListen, err := range archive.IterListens(oldestTimestamp) {
if err != nil {
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
listen := lbapi.AsListen(rawListen)
if listen.UserName == "" {
listen.UserName = userInfo.Name
}
listens = append(listens, listen)
// Update the progress
p.Export.TotalItems += 1
remainingTime := startTime.Sub(listen.ListenedAt)
p.Export.Elapsed = int64(totalDuration.Seconds() - remainingTime.Seconds())
// Allow the importer to start processing the listens by
// sending them in batches.
if len(listens) >= listensBatchSize {
results <- models.ListensResult{Items: listens}
progress <- p
listens = listens[:0]
}
}
results <- models.ListensResult{Items: listens}
p.Export.Complete()
progress <- p
}
func (b *ListenBrainzArchiveBackend) ExportLoves(
ctx context.Context, oldestTimestamp time.Time,
results chan models.LovesResult, progress chan models.TransferProgress) {
startTime := time.Now()
minTime := oldestTimestamp
if minTime.Unix() < 1 {
minTime = time.Unix(1, 0)
}
totalDuration := startTime.Sub(oldestTimestamp)
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(totalDuration.Seconds()),
},
}
archive, err := listenbrainz.OpenExportArchive(b.filePath)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
defer archive.Close()
userInfo, err := archive.UserInfo()
if err != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
batch := make([]listenbrainz.Feedback, 0, lovesBatchSize)
for feedback, err := range archive.IterFeedback(oldestTimestamp) {
if err != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
if feedback.UserName == "" {
feedback.UserName = userInfo.Name
}
batch = append(batch, feedback)
// Update the progress
p.Export.TotalItems += 1
remainingTime := startTime.Sub(time.Unix(feedback.Created, 0))
p.Export.Elapsed = int64(totalDuration.Seconds() - remainingTime.Seconds())
// Allow the importer to start processing the listens by
// sending them in batches.
if len(batch) >= lovesBatchSize {
// The dump does not contain track metadata. Extend it with additional
// lookups
loves, err := lbapi.ExtendTrackMetadata(ctx, &b.lbClient, b.mbClient, &batch)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
results <- models.LovesResult{Items: loves}
progress <- p
batch = batch[:0]
}
}
loves, err := lbapi.ExtendTrackMetadata(ctx, &b.lbClient, b.mbClient, &batch)
if err != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
results <- models.LovesResult{Items: loves}
p.Export.Complete()
progress <- p
}
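Both exporters above flush in fixed-size batches so the importer can start working before the full history is read. One caution with this pattern: reslicing with `batch[:0]` reuses the backing array, which is only safe if the receiver is done with the slice before the next append. A self-contained sketch that allocates a fresh slice per batch instead:

```go
package main

import "fmt"

// flushBatches sends items downstream in batches of at most size,
// followed by the final (possibly partial) batch.
func flushBatches(items []int, size int, out chan<- []int) {
	batch := make([]int, 0, size)
	for _, it := range items {
		batch = append(batch, it)
		if len(batch) >= size {
			out <- batch
			// Fresh slice: the receiver may still hold the one just sent.
			batch = make([]int, 0, size)
		}
	}
	out <- batch
	close(out)
}

func main() {
	out := make(chan []int, 4)
	flushBatches([]int{1, 2, 3, 4, 5}, 2, out)
	for b := range out {
		fmt.Println(b)
	}
}
```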


@ -0,0 +1,40 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
package lbarchive_test
import (
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"go.uploadedlobster.com/scotty/internal/backends/lbarchive"
"go.uploadedlobster.com/scotty/internal/config"
)
func TestInitConfig(t *testing.T) {
c := viper.New()
c.Set("file-path", "/foo/lbarchive.zip")
service := config.NewServiceConfig("test", c)
backend := lbarchive.ListenBrainzArchiveBackend{}
err := backend.InitConfig(&service)
assert.NoError(t, err)
}


@ -0,0 +1,190 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
package listenbrainz
import (
"context"
"time"
"go.uploadedlobster.com/mbtypes"
"go.uploadedlobster.com/musicbrainzws2"
"go.uploadedlobster.com/scotty/internal/listenbrainz"
"go.uploadedlobster.com/scotty/internal/models"
)
func AsListen(lbListen listenbrainz.Listen) models.Listen {
listen := models.Listen{
ListenedAt: time.Unix(lbListen.ListenedAt, 0),
UserName: lbListen.UserName,
Track: AsTrack(lbListen.TrackMetadata),
}
return listen
}
func AsLove(f listenbrainz.Feedback) models.Love {
recordingMBID := f.RecordingMBID
track := f.TrackMetadata
if track == nil {
track = &listenbrainz.Track{}
}
love := models.Love{
UserName: f.UserName,
RecordingMBID: recordingMBID,
Created: time.Unix(f.Created, 0),
Track: AsTrack(*track),
}
if love.Track.RecordingMBID == "" {
love.Track.RecordingMBID = love.RecordingMBID
}
return love
}
func AsTrack(t listenbrainz.Track) models.Track {
track := models.Track{
TrackName: t.TrackName,
ReleaseName: t.ReleaseName,
ArtistNames: []string{t.ArtistName},
Duration: t.Duration(),
TrackNumber: t.TrackNumber(),
DiscNumber: t.DiscNumber(),
RecordingMBID: t.RecordingMBID(),
ReleaseMBID: t.ReleaseMBID(),
ReleaseGroupMBID: t.ReleaseGroupMBID(),
ISRC: t.ISRC(),
AdditionalInfo: t.AdditionalInfo,
}
if t.MBIDMapping != nil && len(track.ArtistMBIDs) == 0 {
for _, artistMBID := range t.MBIDMapping.ArtistMBIDs {
track.ArtistMBIDs = append(track.ArtistMBIDs, artistMBID)
}
}
return track
}
func LookupRecording(
ctx context.Context,
mb *musicbrainzws2.Client,
mbid mbtypes.MBID,
) (*listenbrainz.Track, error) {
filter := musicbrainzws2.IncludesFilter{
Includes: []string{"artist-credits"},
}
recording, err := mb.LookupRecording(ctx, mbid, filter)
if err != nil {
return nil, err
}
artistMBIDs := make([]mbtypes.MBID, 0, len(recording.ArtistCredit))
for _, artist := range recording.ArtistCredit {
artistMBIDs = append(artistMBIDs, artist.Artist.ID)
}
track := listenbrainz.Track{
TrackName: recording.Title,
ArtistName: recording.ArtistCredit.String(),
MBIDMapping: &listenbrainz.MBIDMapping{
// In case of redirects this MBID differs from the looked-up MBID
RecordingMBID: recording.ID,
ArtistMBIDs: artistMBIDs,
},
}
return &track, nil
}
func ExtendTrackMetadata(
ctx context.Context,
lb *listenbrainz.Client,
mb *musicbrainzws2.Client,
feedbacks *[]listenbrainz.Feedback,
) ([]models.Love, error) {
mbids := make([]mbtypes.MBID, 0, len(*feedbacks))
for _, feedback := range *feedbacks {
if feedback.TrackMetadata == nil && feedback.RecordingMBID != "" {
mbids = append(mbids, feedback.RecordingMBID)
}
}
result, err := lb.MetadataRecordings(ctx, mbids)
if err != nil {
return nil, err
}
loves := make([]models.Love, 0, len(*feedbacks))
for _, feedback := range *feedbacks {
if feedback.TrackMetadata == nil && feedback.RecordingMBID != "" {
metadata, ok := result[feedback.RecordingMBID]
if ok {
feedback.TrackMetadata = trackFromMetadataLookup(
feedback.RecordingMBID, metadata)
} else {
// MBID not in result. This is probably an MBID redirect; get
// the data from MusicBrainz instead (slower).
// If this also fails, just leave the metadata empty.
track, err := LookupRecording(ctx, mb, feedback.RecordingMBID)
if err == nil {
feedback.TrackMetadata = track
}
}
}
loves = append(loves, AsLove(feedback))
}
return loves, nil
}
func trackFromMetadataLookup(
recordingMBID mbtypes.MBID,
metadata listenbrainz.RecordingMetadata,
) *listenbrainz.Track {
artistMBIDs := make([]mbtypes.MBID, 0, len(metadata.Artist.Artists))
artists := make([]listenbrainz.Artist, 0, len(metadata.Artist.Artists))
for _, artist := range metadata.Artist.Artists {
artistMBIDs = append(artistMBIDs, artist.ArtistMBID)
artists = append(artists, listenbrainz.Artist{
ArtistCreditName: artist.Name,
ArtistMBID: artist.ArtistMBID,
JoinPhrase: artist.JoinPhrase,
})
}
return &listenbrainz.Track{
TrackName: metadata.Recording.Name,
ArtistName: metadata.Artist.Name,
ReleaseName: metadata.Release.Name,
AdditionalInfo: map[string]any{
"duration_ms": metadata.Recording.Length,
"release_group_mbid": metadata.Release.ReleaseGroupMBID,
},
MBIDMapping: &listenbrainz.MBIDMapping{
RecordingMBID: recordingMBID,
ReleaseMBID: metadata.Release.MBID,
ArtistMBIDs: artistMBIDs,
Artists: artists,
CAAID: metadata.Release.CAAID,
CAAReleaseMBID: metadata.Release.CAAReleaseMBID,
},
}
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -17,6 +17,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package listenbrainz
import (
"context"
"fmt"
"sort"
"time"
@ -25,19 +26,28 @@ import (
"go.uploadedlobster.com/musicbrainzws2"
"go.uploadedlobster.com/scotty/internal/config"
"go.uploadedlobster.com/scotty/internal/i18n"
"go.uploadedlobster.com/scotty/internal/listenbrainz"
"go.uploadedlobster.com/scotty/internal/models"
"go.uploadedlobster.com/scotty/internal/similarity"
"go.uploadedlobster.com/scotty/internal/version"
)
const lovesBatchSize = listenbrainz.MaxItemsPerGet
type ListenBrainzApiBackend struct {
client Client
mbClient musicbrainzws2.Client
client listenbrainz.Client
mbClient *musicbrainzws2.Client
username string
checkDuplicates bool
existingMBIDs map[mbtypes.MBID]bool
}
func (b *ListenBrainzApiBackend) Close() {
if b.mbClient != nil {
b.mbClient.Close()
}
}
func (b *ListenBrainzApiBackend) Name() string { return "listenbrainz" }
func (b *ListenBrainzApiBackend) Options() []models.BackendOption {
@ -57,38 +67,42 @@ func (b *ListenBrainzApiBackend) Options() []models.BackendOption {
}
func (b *ListenBrainzApiBackend) InitConfig(config *config.ServiceConfig) error {
b.client = NewClient(config.GetString("token"))
b.mbClient = *musicbrainzws2.NewClient(musicbrainzws2.AppInfo{
b.client = listenbrainz.NewClient(config.GetString("token"), version.UserAgent())
b.mbClient = musicbrainzws2.NewClient(musicbrainzws2.AppInfo{
Name: version.AppName,
Version: version.AppVersion,
URL: version.AppURL,
})
b.client.MaxResults = MaxItemsPerGet
b.client.MaxResults = listenbrainz.MaxItemsPerGet
b.username = config.GetString("username")
b.checkDuplicates = config.GetBool("check-duplicate-listens", false)
return nil
}
func (b *ListenBrainzApiBackend) StartImport() error { return nil }
func (b *ListenBrainzApiBackend) FinishImport() error { return nil }
func (b *ListenBrainzApiBackend) FinishImport(result *models.ImportResult) error {
return nil
}
func (b *ListenBrainzApiBackend) ExportListens(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
func (b *ListenBrainzApiBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
startTime := time.Now()
minTime := oldestTimestamp
if minTime.Unix() < 1 {
minTime = time.Unix(1, 0)
}
totalDuration := startTime.Sub(minTime)
defer close(results)
p := models.Progress{Total: int64(totalDuration.Seconds())}
totalDuration := startTime.Sub(oldestTimestamp)
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(totalDuration.Seconds()),
},
}
for {
result, err := b.client.GetListens(b.username, time.Now(), minTime)
result, err := b.client.GetListens(ctx, b.username, time.Now(), minTime)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
@ -98,7 +112,7 @@ func (b *ListenBrainzApiBackend) ExportListens(oldestTimestamp time.Time, result
if minTime.Unix() < result.Payload.OldestListenTimestamp {
minTime = time.Unix(result.Payload.OldestListenTimestamp, 0)
totalDuration = startTime.Sub(minTime)
p.Total = int64(totalDuration.Seconds())
p.Export.Total = int64(totalDuration.Seconds())
continue
} else {
break
@ -113,7 +127,7 @@ func (b *ListenBrainzApiBackend) ExportListens(oldestTimestamp time.Time, result
for _, listen := range result.Payload.Listens {
if listen.ListenedAt > oldestTimestamp.Unix() {
listens = append(listens, listen.AsListen())
listens = append(listens, AsListen(listen))
} else {
// result contains listens older than oldestTimestamp
break
@ -121,34 +135,36 @@ func (b *ListenBrainzApiBackend) ExportListens(oldestTimestamp time.Time, result
}
sort.Sort(listens)
p.Elapsed = int64(totalDuration.Seconds() - remainingTime.Seconds())
p.Export.TotalItems += len(listens)
p.Export.Elapsed = int64(totalDuration.Seconds() - remainingTime.Seconds())
progress <- p
results <- models.ListensResult{Items: listens, OldestTimestamp: minTime}
}
results <- models.ListensResult{OldestTimestamp: minTime}
progress <- p.Complete()
p.Export.Complete()
progress <- p
}
func (b *ListenBrainzApiBackend) ImportListens(export models.ListensResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (b *ListenBrainzApiBackend) ImportListens(ctx context.Context, export models.ListensResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
total := len(export.Items)
p := models.Progress{}.FromImportResult(importResult)
for i := 0; i < total; i += MaxListensPerRequest {
listens := export.Items[i:min(i+MaxListensPerRequest, total)]
p := models.TransferProgress{}.FromImportResult(importResult, false)
for i := 0; i < total; i += listenbrainz.MaxListensPerRequest {
listens := export.Items[i:min(i+listenbrainz.MaxListensPerRequest, total)]
count := len(listens)
if count == 0 {
break
}
submission := ListenSubmission{
ListenType: Import,
Payload: make([]Listen, 0, count),
submission := listenbrainz.ListenSubmission{
ListenType: listenbrainz.Import,
Payload: make([]listenbrainz.Listen, 0, count),
}
for _, l := range listens {
if b.checkDuplicates {
isDupe, err := b.checkDuplicateListen(l)
p.Elapsed += 1
isDupe, err := b.checkDuplicateListen(ctx, l)
p.Import.Elapsed += 1
progress <- p
if err != nil {
return importResult, err
@ -157,14 +173,15 @@ func (b *ListenBrainzApiBackend) ImportListens(export models.ListensResult, impo
msg := i18n.Tr("Ignored duplicate listen %v: \"%v\" by %v (%v)",
l.ListenedAt, l.TrackName, l.ArtistName(), l.RecordingMBID)
importResult.Log(models.Info, msg)
importResult.UpdateTimestamp(l.ListenedAt)
continue
}
}
l.FillAdditionalInfo()
listen := Listen{
listen := listenbrainz.Listen{
ListenedAt: l.ListenedAt.Unix(),
TrackMetadata: Track{
TrackMetadata: listenbrainz.Track{
TrackName: l.TrackName,
ReleaseName: l.ReleaseName,
ArtistName: l.ArtistName(),
@ -178,7 +195,7 @@ func (b *ListenBrainzApiBackend) ImportListens(export models.ListensResult, impo
}
if len(submission.Payload) > 0 {
_, err := b.client.SubmitListens(submission)
_, err := b.client.SubmitListens(ctx, submission)
if err != nil {
return importResult, err
}
@ -188,41 +205,47 @@ func (b *ListenBrainzApiBackend) ImportListens(export models.ListensResult, impo
importResult.UpdateTimestamp(listens[count-1].ListenedAt)
}
importResult.ImportCount += count
progress <- p.FromImportResult(importResult)
progress <- p.FromImportResult(importResult, false)
}
return importResult, nil
}
func (b *ListenBrainzApiBackend) ExportLoves(oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.Progress) {
defer close(results)
func (b *ListenBrainzApiBackend) ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
exportChan := make(chan models.LovesResult)
p := models.Progress{}
go b.exportLoves(oldestTimestamp, exportChan)
for existingLoves := range exportChan {
if existingLoves.Error != nil {
progress <- p.Complete()
results <- models.LovesResult{Error: existingLoves.Error}
p := models.TransferProgress{
Export: &models.Progress{},
}
p.Total = int64(existingLoves.Total)
p.Elapsed += int64(existingLoves.Items.Len())
go b.exportLoves(ctx, oldestTimestamp, exportChan)
for existingLoves := range exportChan {
if existingLoves.Error != nil {
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: existingLoves.Error}
return
}
p.Export.TotalItems = existingLoves.Total
p.Export.Total = int64(existingLoves.Total)
p.Export.Elapsed += int64(len(existingLoves.Items))
progress <- p
results <- existingLoves
}
progress <- p.Complete()
p.Export.Complete()
progress <- p
}
func (b *ListenBrainzApiBackend) exportLoves(oldestTimestamp time.Time, results chan models.LovesResult) {
func (b *ListenBrainzApiBackend) exportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult) {
offset := 0
defer close(results)
loves := make(models.LovesList, 0, 2*MaxItemsPerGet)
allLoves := make(models.LovesList, 0, 2*listenbrainz.MaxItemsPerGet)
batch := make([]listenbrainz.Feedback, 0, lovesBatchSize)
out:
for {
result, err := b.client.GetFeedback(b.username, 1, offset)
result, err := b.client.GetFeedback(ctx, b.username, 1, offset)
if err != nil {
results <- models.LovesResult{Error: err}
return
@ -234,41 +257,55 @@ out:
}
for _, feedback := range result.Feedback {
// Missing track metadata indicates that the recording MBID is no
// longer available and might have been merged. Try fetching details
// from MusicBrainz.
if feedback.TrackMetadata == nil {
track, err := b.lookupRecording(feedback.RecordingMBID)
if err == nil {
feedback.TrackMetadata = track
}
}
love := feedback.AsLove()
if love.Created.After(oldestTimestamp) {
loves = append(loves, love)
if time.Unix(feedback.Created, 0).After(oldestTimestamp) {
batch = append(batch, feedback)
} else {
break out
}
if len(batch) >= lovesBatchSize {
// Missing track metadata indicates that the recording MBID is no
// longer available and might have been merged. Try fetching details
// from MusicBrainz.
lovesBatch, err := ExtendTrackMetadata(ctx, &b.client, b.mbClient, &batch)
if err != nil {
results <- models.LovesResult{Error: err}
return
}
offset += MaxItemsPerGet
for _, l := range lovesBatch {
allLoves = append(allLoves, l)
}
}
}
sort.Sort(loves)
offset += listenbrainz.MaxItemsPerGet
}
lovesBatch, err := ExtendTrackMetadata(ctx, &b.client, b.mbClient, &batch)
if err != nil {
results <- models.LovesResult{Error: err}
return
}
for _, l := range lovesBatch {
allLoves = append(allLoves, l)
}
sort.Sort(allLoves)
results <- models.LovesResult{
Total: len(loves),
Items: loves,
Total: len(allLoves),
Items: allLoves,
}
}
func (b *ListenBrainzApiBackend) ImportLoves(export models.LovesResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (b *ListenBrainzApiBackend) ImportLoves(ctx context.Context, export models.LovesResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
if len(b.existingMBIDs) == 0 {
existingLovesChan := make(chan models.LovesResult)
go b.exportLoves(time.Unix(0, 0), existingLovesChan)
go b.exportLoves(ctx, time.Unix(0, 0), existingLovesChan)
// TODO: Store MBIDs directly
b.existingMBIDs = make(map[mbtypes.MBID]bool, MaxItemsPerGet)
b.existingMBIDs = make(map[mbtypes.MBID]bool, listenbrainz.MaxItemsPerGet)
for existingLoves := range existingLovesChan {
if existingLoves.Error != nil {
@ -294,7 +331,7 @@ func (b *ListenBrainzApiBackend) ImportLoves(export models.LovesResult, importRe
}
if recordingMBID == "" {
lookup, err := b.client.Lookup(love.TrackName, love.ArtistName())
lookup, err := b.client.Lookup(ctx, love.TrackName, love.ArtistName())
if err == nil {
recordingMBID = lookup.RecordingMBID
}
@ -306,7 +343,7 @@ func (b *ListenBrainzApiBackend) ImportLoves(export models.LovesResult, importRe
if b.existingMBIDs[recordingMBID] {
ok = true
} else {
resp, err := b.client.SendFeedback(Feedback{
resp, err := b.client.SendFeedback(ctx, listenbrainz.Feedback{
RecordingMBID: recordingMBID,
Score: 1,
})
@ -332,7 +369,7 @@ func (b *ListenBrainzApiBackend) ImportLoves(export models.LovesResult, importRe
importResult.Log(models.Error, msg)
}
progress <- models.Progress{}.FromImportResult(importResult)
progress <- models.TransferProgress{}.FromImportResult(importResult, false)
}
return importResult, nil
@ -342,7 +379,7 @@ var defaultDuration = time.Duration(3 * time.Minute)
const trackSimilarityThreshold = 0.9
func (b *ListenBrainzApiBackend) checkDuplicateListen(listen models.Listen) (bool, error) {
func (b *ListenBrainzApiBackend) checkDuplicateListen(ctx context.Context, listen models.Listen) (bool, error) {
// Find listens
duration := listen.Duration
if duration == 0 {
@ -350,13 +387,13 @@ func (b *ListenBrainzApiBackend) checkDuplicateListen(listen models.Listen) (boo
}
minTime := listen.ListenedAt.Add(-duration)
maxTime := listen.ListenedAt.Add(duration)
candidates, err := b.client.GetListens(b.username, maxTime, minTime)
candidates, err := b.client.GetListens(ctx, b.username, maxTime, minTime)
if err != nil {
return false, err
}
for _, c := range candidates.Payload.Listens {
sim := similarity.CompareTracks(listen.Track, c.TrackMetadata.AsTrack())
sim := similarity.CompareTracks(listen.Track, AsTrack(c.TrackMetadata))
if sim >= trackSimilarityThreshold {
return true, nil
}
@ -364,81 +401,3 @@ func (b *ListenBrainzApiBackend) checkDuplicateListen(listen models.Listen) (boo
return false, nil
}
func (b *ListenBrainzApiBackend) lookupRecording(mbid mbtypes.MBID) (*Track, error) {
filter := musicbrainzws2.IncludesFilter{
Includes: []string{"artist-credits"},
}
recording, err := b.mbClient.LookupRecording(mbid, filter)
if err != nil {
return nil, err
}
artistMBIDs := make([]mbtypes.MBID, 0, len(recording.ArtistCredit))
for _, artist := range recording.ArtistCredit {
artistMBIDs = append(artistMBIDs, artist.Artist.ID)
}
track := Track{
TrackName: recording.Title,
ArtistName: recording.ArtistCredit.String(),
MBIDMapping: &MBIDMapping{
// In case of redirects this MBID differs from the looked up MBID
RecordingMBID: recording.ID,
ArtistMBIDs: artistMBIDs,
},
}
return &track, nil
}
func (lbListen Listen) AsListen() models.Listen {
listen := models.Listen{
ListenedAt: time.Unix(lbListen.ListenedAt, 0),
UserName: lbListen.UserName,
Track: lbListen.TrackMetadata.AsTrack(),
}
return listen
}
func (f Feedback) AsLove() models.Love {
recordingMBID := f.RecordingMBID
track := f.TrackMetadata
if track == nil {
track = &Track{}
}
love := models.Love{
UserName: f.UserName,
RecordingMBID: recordingMBID,
Created: time.Unix(f.Created, 0),
Track: track.AsTrack(),
}
if love.Track.RecordingMBID == "" {
love.Track.RecordingMBID = love.RecordingMBID
}
return love
}
func (t Track) AsTrack() models.Track {
track := models.Track{
TrackName: t.TrackName,
ReleaseName: t.ReleaseName,
ArtistNames: []string{t.ArtistName},
Duration: t.Duration(),
TrackNumber: t.TrackNumber(),
DiscNumber: t.DiscNumber(),
RecordingMBID: t.RecordingMBID(),
ReleaseMBID: t.ReleaseMBID(),
ReleaseGroupMBID: t.ReleaseGroupMBID(),
ISRC: t.ISRC(),
AdditionalInfo: t.AdditionalInfo,
}
if t.MBIDMapping != nil && len(track.ArtistMBIDs) == 0 {
for _, artistMBID := range t.MBIDMapping.ArtistMBIDs {
track.ArtistMBIDs = append(track.ArtistMBIDs, artistMBID)
}
}
return track
}


@ -24,15 +24,16 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uploadedlobster.com/mbtypes"
"go.uploadedlobster.com/scotty/internal/backends/listenbrainz"
lbapi "go.uploadedlobster.com/scotty/internal/backends/listenbrainz"
"go.uploadedlobster.com/scotty/internal/config"
"go.uploadedlobster.com/scotty/internal/listenbrainz"
)
func TestInitConfig(t *testing.T) {
c := viper.New()
c.Set("token", "thetoken")
service := config.NewServiceConfig("test", c)
backend := listenbrainz.ListenBrainzApiBackend{}
backend := lbapi.ListenBrainzApiBackend{}
err := backend.InitConfig(&service)
assert.NoError(t, err)
}
@ -57,7 +58,7 @@ func TestListenBrainzListenAsListen(t *testing.T) {
},
},
}
listen := lbListen.AsListen()
listen := lbapi.AsListen(lbListen)
assert.Equal(t, time.Unix(1699289873, 0), listen.ListenedAt)
assert.Equal(t, lbListen.UserName, listen.UserName)
assert.Equal(t, time.Duration(413787*time.Millisecond), listen.Duration)
@ -93,7 +94,7 @@ func TestListenBrainzFeedbackAsLove(t *testing.T) {
},
},
}
love := feedback.AsLove()
love := lbapi.AsLove(feedback)
assert := assert.New(t)
assert.Equal(time.Unix(1699859066, 0).Unix(), love.Created.Unix())
assert.Equal(feedback.UserName, love.UserName)
@ -114,7 +115,7 @@ func TestListenBrainzPartialFeedbackAsLove(t *testing.T) {
RecordingMBID: recordingMBID,
Score: 1,
}
love := feedback.AsLove()
love := lbapi.AsLove(feedback)
assert := assert.New(t)
assert.Equal(time.Unix(1699859066, 0).Unix(), love.Created.Unix())
assert.Equal(recordingMBID, love.RecordingMBID)


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -22,6 +22,7 @@ THE SOFTWARE.
package maloja
import (
"context"
"errors"
"strconv"
@ -48,9 +49,10 @@ func NewClient(serverURL string, token string) Client {
}
}
func (c Client) GetScrobbles(page int, perPage int) (result GetScrobblesResult, err error) {
func (c Client) GetScrobbles(ctx context.Context, page int, perPage int) (result GetScrobblesResult, err error) {
const path = "/apis/mlj_1/scrobbles"
response, err := c.HTTPClient.R().
SetContext(ctx).
SetQueryParams(map[string]string{
"page": strconv.Itoa(page),
"perpage": strconv.Itoa(perPage),
@ -65,10 +67,11 @@ func (c Client) GetScrobbles(page int, perPage int) (result GetScrobblesResult,
return
}
func (c Client) NewScrobble(scrobble NewScrobble) (result NewScrobbleResult, err error) {
func (c Client) NewScrobble(ctx context.Context, scrobble NewScrobble) (result NewScrobbleResult, err error) {
const path = "/apis/mlj_1/newscrobble"
scrobble.Key = c.token
response, err := c.HTTPClient.R().
SetContext(ctx).
SetBody(scrobble).
SetResult(&result).
Post(path)


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -22,6 +22,7 @@ THE SOFTWARE.
package maloja_test
import (
"context"
"net/http"
"testing"
@ -48,7 +49,8 @@ func TestGetScrobbles(t *testing.T) {
"https://maloja.example.com/apis/mlj_1/scrobbles",
"testdata/scrobbles.json")
result, err := client.GetScrobbles(0, 2)
ctx := context.Background()
result, err := client.GetScrobbles(ctx, 0, 2)
require.NoError(t, err)
assert := assert.New(t)
@ -69,12 +71,13 @@ func TestNewScrobble(t *testing.T) {
url := server + "/apis/mlj_1/newscrobble"
httpmock.RegisterResponder("POST", url, responder)
ctx := context.Background()
scrobble := maloja.NewScrobble{
Title: "Oweynagat",
Artist: "Dool",
Time: 1699574369,
}
result, err := client.NewScrobble(scrobble)
result, err := client.NewScrobble(ctx, scrobble)
require.NoError(t, err)
assert.Equal(t, "success", result.Status)


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -17,6 +17,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package maloja
import (
"context"
"errors"
"sort"
"strings"
@ -34,6 +35,8 @@ type MalojaApiBackend struct {
func (b *MalojaApiBackend) Name() string { return "maloja" }
func (b *MalojaApiBackend) Close() {}
func (b *MalojaApiBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "server-url",
@ -61,23 +64,28 @@ func (b *MalojaApiBackend) InitConfig(config *config.ServiceConfig) error {
}
func (b *MalojaApiBackend) StartImport() error { return nil }
func (b *MalojaApiBackend) FinishImport() error { return nil }
func (b *MalojaApiBackend) FinishImport(result *models.ImportResult) error {
return nil
}
func (b *MalojaApiBackend) ExportListens(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
func (b *MalojaApiBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
page := 0
perPage := MaxItemsPerGet
defer close(results)
// We need to gather the full list of listens in order to sort them
listens := make(models.ListensList, 0, 2*perPage)
p := models.Progress{Total: int64(perPage)}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(perPage),
},
}
out:
for {
result, err := b.client.GetScrobbles(page, perPage)
result, err := b.client.GetScrobbles(ctx, page, perPage)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
@ -89,24 +97,27 @@ out:
for _, scrobble := range result.List {
if scrobble.ListenedAt > oldestTimestamp.Unix() {
p.Elapsed += 1
p.Export.Elapsed += 1
listens = append(listens, scrobble.AsListen())
} else {
break out
}
}
p.Total += int64(perPage)
p.Export.TotalItems = len(listens)
p.Export.Total += int64(perPage)
progress <- p
page += 1
}
sort.Sort(listens)
progress <- p.Complete()
p.Export.Complete()
progress <- p
results <- models.ListensResult{Items: listens}
}
func (b *MalojaApiBackend) ImportListens(export models.ListensResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (b *MalojaApiBackend) ImportListens(ctx context.Context, export models.ListensResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
p := models.TransferProgress{}.FromImportResult(importResult, false)
for _, listen := range export.Items {
scrobble := NewScrobble{
Title: listen.TrackName,
@ -118,7 +129,7 @@ func (b *MalojaApiBackend) ImportListens(export models.ListensResult, importResu
Nofix: b.nofix,
}
resp, err := b.client.NewScrobble(scrobble)
resp, err := b.client.NewScrobble(ctx, scrobble)
if err != nil {
return importResult, err
} else if resp.Status != "success" {
@ -127,7 +138,7 @@ func (b *MalojaApiBackend) ImportListens(export models.ListensResult, importResu
importResult.UpdateTimestamp(listen.ListenedAt)
importResult.ImportCount += 1
progress <- models.Progress{}.FromImportResult(importResult)
progress <- p.FromImportResult(importResult, false)
}
return importResult, nil


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -17,7 +17,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package scrobblerlog
import (
"bufio"
"context"
"fmt"
"os"
"sort"
@ -41,6 +41,8 @@ type ScrobblerLogBackend struct {
func (b *ScrobblerLogBackend) Name() string { return "scrobbler-log" }
func (b *ScrobblerLogBackend) Close() {}
func (b *ScrobblerLogBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "file-path",
@ -67,18 +69,19 @@ func (b *ScrobblerLogBackend) InitConfig(config *config.ServiceConfig) error {
b.filePath = config.GetString("file-path")
b.ignoreSkipped = config.GetBool("ignore-skipped", true)
b.append = config.GetBool("append", true)
timezone := config.GetString("time-zone")
if timezone != "" {
b.log = scrobblerlog.ScrobblerLog{
TZ: scrobblerlog.TimezoneUTC,
Client: "Rockbox unknown $Revision$",
}
if timezone := config.GetString("time-zone"); timezone != "" {
location, err := time.LoadLocation(timezone)
if err != nil {
return fmt.Errorf("Invalid time-zone %q: %w", timezone, err)
}
b.log.FallbackTimezone = location
}
b.log = scrobblerlog.ScrobblerLog{
TZ: scrobblerlog.TimezoneUTC,
Client: "Rockbox unknown $Revision$",
}
return nil
}
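The `InitConfig` change above resolves the optional `time-zone` option through `time.LoadLocation`, treating an empty option as "no fallback" and an unresolvable name as a configuration error. A self-contained sketch of that handling; `loadFallbackTimezone` is an illustrative helper, not the scotty API:

```go
package main

import (
	"fmt"
	"time"
)

// loadFallbackTimezone mirrors the time-zone handling above: an empty
// name means no fallback time zone, an invalid name is an error.
func loadFallbackTimezone(name string) (*time.Location, error) {
	if name == "" {
		return nil, nil
	}
	loc, err := time.LoadLocation(name)
	if err != nil {
		return nil, fmt.Errorf("invalid time-zone %q: %w", name, err)
	}
	return loc, nil
}

func main() {
	loc, err := loadFallbackTimezone("UTC")
	fmt.Println(loc, err)
	_, err = loadFallbackTimezone("Not/AZone")
	fmt.Println(err != nil)
}
```

Note that `time.LoadLocation` for zones other than `"UTC"` and `"Local"` depends on the system's tzdata (or the embedded `time/tzdata` package), which is why the lookup can fail at runtime and is validated once at configuration time.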
@ -104,8 +107,7 @@ func (b *ScrobblerLogBackend) StartImport() error {
b.append = false
} else {
// Verify existing file is a scrobbler log
reader := bufio.NewReader(file)
if err = b.log.ReadHeader(reader); err != nil {
if err = b.log.ReadHeader(file); err != nil {
file.Close()
return err
}
@ -126,15 +128,18 @@ func (b *ScrobblerLogBackend) StartImport() error {
return nil
}
func (b *ScrobblerLogBackend) FinishImport() error {
func (b *ScrobblerLogBackend) FinishImport(result *models.ImportResult) error {
return b.file.Close()
}
func (b *ScrobblerLogBackend) ExportListens(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
defer close(results)
func (b *ScrobblerLogBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
file, err := os.Open(b.filePath)
p := models.TransferProgress{
Export: &models.Progress{},
}
if err != nil {
progress <- models.Progress{}.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
@ -143,24 +148,31 @@ func (b *ScrobblerLogBackend) ExportListens(oldestTimestamp time.Time, results c
err = b.log.Parse(file, b.ignoreSkipped)
if err != nil {
progress <- models.Progress{}.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
listens := make(models.ListensList, 0, len(b.log.Records))
client := strings.Split(b.log.Client, " ")[0]
for _, record := range b.log.Records {
p.Export.Total = int64(len(b.log.Records))
for _, record := range models.IterExportProgress(b.log.Records, &p, progress) {
listen := recordToListen(record, client)
if listen.ListenedAt.After(oldestTimestamp) {
listens = append(listens, recordToListen(record, client))
p.Export.TotalItems += 1
}
sort.Sort(listens.NewerThan(oldestTimestamp))
progress <- models.Progress{Total: int64(len(listens))}.Complete()
}
sort.Sort(listens)
results <- models.ListensResult{Items: listens}
}
func (b *ScrobblerLogBackend) ImportListens(export models.ListensResult, importResult models.ImportResult, progress chan models.Progress) (models.ImportResult, error) {
func (b *ScrobblerLogBackend) ImportListens(ctx context.Context, export models.ListensResult, importResult models.ImportResult, progress chan models.TransferProgress) (models.ImportResult, error) {
p := models.TransferProgress{}.FromImportResult(importResult, false)
records := make([]scrobblerlog.Record, len(export.Items))
for i, listen := range export.Items {
for i, listen := range models.IterImportProgress(export.Items, &p, progress) {
records[i] = listenToRecord(listen)
}
lastTimestamp, err := b.log.Append(b.file, records)
@ -170,8 +182,6 @@ func (b *ScrobblerLogBackend) ImportListens(export models.ListensResult, importR
importResult.UpdateTimestamp(lastTimestamp)
importResult.ImportCount += len(export.Items)
progress <- models.Progress{}.FromImportResult(importResult)
return importResult, nil
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -59,17 +59,18 @@ func NewClient(token oauth2.TokenSource) Client {
}
}
func (c Client) RecentlyPlayedAfter(after time.Time, limit int) (RecentlyPlayedResult, error) {
return c.recentlyPlayed(&after, nil, limit)
func (c Client) RecentlyPlayedAfter(ctx context.Context, after time.Time, limit int) (RecentlyPlayedResult, error) {
return c.recentlyPlayed(ctx, &after, nil, limit)
}
func (c Client) RecentlyPlayedBefore(before time.Time, limit int) (RecentlyPlayedResult, error) {
return c.recentlyPlayed(nil, &before, limit)
func (c Client) RecentlyPlayedBefore(ctx context.Context, before time.Time, limit int) (RecentlyPlayedResult, error) {
return c.recentlyPlayed(ctx, nil, &before, limit)
}
func (c Client) recentlyPlayed(after *time.Time, before *time.Time, limit int) (result RecentlyPlayedResult, err error) {
func (c Client) recentlyPlayed(ctx context.Context, after *time.Time, before *time.Time, limit int) (result RecentlyPlayedResult, err error) {
const path = "/me/player/recently-played"
request := c.HTTPClient.R().
SetContext(ctx).
SetQueryParam("limit", strconv.Itoa(limit)).
SetResult(&result)
if after != nil {
@@ -85,9 +86,10 @@ func (c Client) recentlyPlayed(after *time.Time, before *time.Time, limit int) (
return
}
func (c Client) UserTracks(offset int, limit int) (result TracksResult, err error) {
func (c Client) UserTracks(ctx context.Context, offset int, limit int) (result TracksResult, err error) {
const path = "/me/tracks"
response, err := c.HTTPClient.R().
SetContext(ctx).
SetQueryParams(map[string]string{
"offset": strconv.Itoa(offset),
"limit": strconv.Itoa(limit),


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -22,6 +22,7 @@ THE SOFTWARE.
package spotify_test
import (
"context"
"net/http"
"testing"
"time"
@@ -47,7 +48,8 @@ func TestRecentlyPlayedAfter(t *testing.T) {
"https://api.spotify.com/v1/me/player/recently-played",
"testdata/recently-played.json")
result, err := client.RecentlyPlayedAfter(time.Now(), 3)
ctx := context.Background()
result, err := client.RecentlyPlayedAfter(ctx, time.Now(), 3)
require.NoError(t, err)
assert := assert.New(t)
@@ -67,7 +69,8 @@ func TestGetUserTracks(t *testing.T) {
"https://api.spotify.com/v1/me/tracks",
"testdata/user-tracks.json")
result, err := client.UserTracks(0, 2)
ctx := context.Background()
result, err := client.UserTracks(ctx, 0, 2)
require.NoError(t, err)
assert := assert.New(t)


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@@ -18,6 +18,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package spotify
import (
"context"
"math"
"net/url"
"sort"
@@ -40,6 +41,8 @@ type SpotifyApiBackend struct {
func (b *SpotifyApiBackend) Name() string { return "spotify" }
func (b *SpotifyApiBackend) Close() {}
func (b *SpotifyApiBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "client-id",
@@ -95,20 +98,22 @@ func (b *SpotifyApiBackend) OAuth2Setup(token oauth2.TokenSource) error {
return nil
}
func (b *SpotifyApiBackend) ExportListens(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
func (b *SpotifyApiBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
startTime := time.Now()
minTime := oldestTimestamp
totalDuration := startTime.Sub(oldestTimestamp)
defer close(results)
p := models.Progress{Total: int64(totalDuration.Seconds())}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(totalDuration.Seconds()),
},
}
for {
result, err := b.client.RecentlyPlayedAfter(minTime, MaxItemsPerGet)
result, err := b.client.RecentlyPlayedAfter(ctx, minTime, MaxItemsPerGet)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
@@ -120,7 +125,8 @@ func (b *SpotifyApiBackend) ExportListens(oldestTimestamp time.Time, results cha
// Set minTime to the newest returned listen
after, err := strconv.ParseInt(result.Cursors.After, 10, 64)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
} else if after <= minTime.Unix() {
@@ -148,32 +154,37 @@ func (b *SpotifyApiBackend) ExportListens(oldestTimestamp time.Time, results cha
}
sort.Sort(listens)
p.Elapsed = int64(totalDuration.Seconds() - remainingTime.Seconds())
p.Export.TotalItems += len(listens)
p.Export.Elapsed = int64(totalDuration.Seconds() - remainingTime.Seconds())
progress <- p
results <- models.ListensResult{Items: listens, OldestTimestamp: minTime}
}
results <- models.ListensResult{OldestTimestamp: minTime}
progress <- p.Complete()
p.Export.Complete()
progress <- p
}
func (b *SpotifyApiBackend) ExportLoves(oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.Progress) {
func (b *SpotifyApiBackend) ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
// Choose a high offset: we attempt to search the loves backwards,
// starting at the oldest one.
offset := math.MaxInt32
perPage := MaxItemsPerGet
defer close(results)
p := models.Progress{Total: int64(perPage)}
p := models.TransferProgress{
Export: &models.Progress{
Total: int64(perPage),
},
}
totalCount := 0
exportCount := 0
out:
for {
result, err := b.client.UserTracks(offset, perPage)
result, err := b.client.UserTracks(ctx, offset, perPage)
if err != nil {
progress <- p.Complete()
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
@@ -181,7 +192,7 @@ out:
// The offset was higher than the actual number of tracks. Adjust the offset
// and continue.
if offset >= result.Total {
p.Total = int64(result.Total)
p.Export.Total = int64(result.Total)
totalCount = result.Total
offset = max(result.Total-perPage, 0)
continue
@@ -205,7 +216,7 @@ out:
exportCount += len(loves)
sort.Sort(loves)
results <- models.LovesResult{Items: loves, Total: totalCount}
p.Elapsed += int64(count)
p.Export.Elapsed += int64(count)
progress <- p
if offset <= 0 {
@@ -220,7 +231,8 @@ out:
}
results <- models.LovesResult{Total: exportCount}
progress <- p.Complete()
p.Export.Complete()
progress <- p
}
func (l Listen) AsListen() models.Listen {
@@ -248,7 +260,7 @@ func (t Track) AsTrack() models.Track {
TrackName: t.Name,
ReleaseName: t.Album.Name,
ArtistNames: make([]string, 0, len(t.Artists)),
Duration: time.Duration(t.DurationMs * int(time.Millisecond)),
Duration: time.Duration(t.DurationMs) * time.Millisecond,
TrackNumber: t.TrackNumber,
DiscNumber: t.DiscNumber,
ISRC: t.ExternalIDs.ISRC,


@@ -0,0 +1,82 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
Scotty is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later version.
Scotty is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
Scotty. If not, see <https://www.gnu.org/licenses/>.
*/
package spotifyhistory
import (
"errors"
"sort"
"go.uploadedlobster.com/scotty/pkg/archive"
)
var historyFileGlobs = []string{
"Spotify Extended Streaming History/Streaming_History_Audio_*.json",
"Streaming_History_Audio_*.json",
}
// Access a Spotify history archive.
// This can be either the ZIP file as provided by Spotify
// or a directory into which it was extracted.
type HistoryArchive struct {
backend archive.ArchiveReader
}
// Open a Spotify history archive from a file path.
func OpenHistoryArchive(path string) (*HistoryArchive, error) {
backend, err := archive.OpenArchive(path)
if err != nil {
return nil, err
}
return &HistoryArchive{backend: backend}, nil
}
func (h *HistoryArchive) GetHistoryFiles() ([]archive.FileInfo, error) {
for _, glob := range historyFileGlobs {
files, err := h.backend.Glob(glob)
if err != nil {
return nil, err
}
if len(files) > 0 {
sort.Slice(files, func(i, j int) bool {
return files[i].Name < files[j].Name
})
return files, nil
}
}
// No matching files found, fail
return nil, errors.New("found no history files in archive")
}
func readHistoryFile(f archive.OpenableFile) (StreamingHistory, error) {
file, err := f.Open()
if err != nil {
return nil, err
}
defer file.Close()
history := StreamingHistory{}
err = history.Read(file)
if err != nil {
return nil, err
}
return history, nil
}


@@ -89,7 +89,7 @@ func (i HistoryItem) AsListen() models.Listen {
AdditionalInfo: models.AdditionalInfo{},
},
ListenedAt: i.Timestamp,
PlaybackDuration: time.Duration(i.MillisecondsPlayed * int(time.Millisecond)),
PlaybackDuration: time.Duration(i.MillisecondsPlayed) * time.Millisecond,
UserName: i.UserName,
}
if trackURL, err := formatSpotifyUri(i.SpotifyTrackUri); err != nil {


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@@ -18,10 +18,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package spotifyhistory
import (
"os"
"path"
"path/filepath"
"slices"
"context"
"sort"
"time"
@@ -30,10 +27,8 @@ import (
"go.uploadedlobster.com/scotty/internal/models"
)
const historyFileGlob = "Streaming_History_Audio_*.json"
type SpotifyHistoryBackend struct {
dirPath string
archivePath string
ignoreIncognito bool
ignoreSkipped bool
skippedMinSeconds int
@@ -41,11 +36,15 @@ type SpotifyHistoryBackend struct {
func (b *SpotifyHistoryBackend) Name() string { return "spotify-history" }
func (b *SpotifyHistoryBackend) Close() {}
func (b *SpotifyHistoryBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "dir-path",
Label: i18n.Tr("Directory path"),
Name: "archive-path",
Label: i18n.Tr("Archive path"),
Type: models.String,
Default: "./my_spotify_data_extended.zip",
MigrateFrom: "dir-path",
}, {
Name: "ignore-incognito",
Label: i18n.Tr("Ignore listens in incognito mode"),
@@ -65,33 +64,55 @@ func (b *SpotifyHistoryBackend) Options() []models.BackendOption {
}
func (b *SpotifyHistoryBackend) InitConfig(config *config.ServiceConfig) error {
b.dirPath = config.GetString("dir-path")
b.archivePath = config.GetString("archive-path")
// Backward compatibility
if b.archivePath == "" {
b.archivePath = config.GetString("dir-path")
}
b.ignoreIncognito = config.GetBool("ignore-incognito", true)
b.ignoreSkipped = config.GetBool("ignore-skipped", false)
b.skippedMinSeconds = config.GetInt("ignore-min-duration-seconds", 30)
return nil
}
func (b *SpotifyHistoryBackend) ExportListens(oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.Progress) {
defer close(results)
func (b *SpotifyHistoryBackend) ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan models.ListensResult, progress chan models.TransferProgress) {
p := models.TransferProgress{
Export: &models.Progress{},
}
files, err := filepath.Glob(path.Join(b.dirPath, historyFileGlob))
archive, err := OpenHistoryArchive(b.archivePath)
if err != nil {
progress <- models.Progress{}.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
slices.Sort(files)
fileCount := int64(len(files))
p := models.Progress{Total: fileCount}
for i, filePath := range files {
history, err := readHistoryFile(filePath)
files, err := archive.GetHistoryFiles()
if err != nil {
progress <- models.Progress{}.Complete()
p.Export.Abort()
progress <- p
results <- models.ListensResult{Error: err}
return
}
fileCount := int64(len(files))
p.Export.Total = fileCount
for i, f := range files {
if err := ctx.Err(); err != nil {
results <- models.ListensResult{Error: err}
p.Export.Abort()
progress <- p
return
}
history, err := readHistoryFile(f.File)
if err != nil {
results <- models.ListensResult{Error: err}
p.Export.Abort()
progress <- p
return
}
listens := history.AsListenList(ListenListOptions{
IgnoreIncognito: b.ignoreIncognito,
IgnoreSkipped: b.ignoreSkipped,
@@ -99,25 +120,11 @@ func (b *SpotifyHistoryBackend) ExportListens(oldestTimestamp time.Time, results
})
sort.Sort(listens)
results <- models.ListensResult{Items: listens}
p.Elapsed = int64(i)
p.Export.Elapsed = int64(i)
p.Export.TotalItems += len(listens)
progress <- p
}
progress <- p.Complete()
}
func readHistoryFile(filePath string) (StreamingHistory, error) {
file, err := os.Open(filePath)
if err != nil {
return nil, err
}
defer file.Close()
history := StreamingHistory{}
err = history.Read(file)
if err != nil {
return nil, err
}
return history, nil
p.Export.Complete()
progress <- p
}


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@@ -17,6 +17,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package subsonic
import (
"context"
"net/http"
"sort"
"time"
@@ -36,6 +37,8 @@ type SubsonicApiBackend struct {
func (b *SubsonicApiBackend) Name() string { return "subsonic" }
func (b *SubsonicApiBackend) Close() {}
func (b *SubsonicApiBackend) Options() []models.BackendOption {
return []models.BackendOption{{
Name: "server-url",
@@ -63,26 +66,30 @@ func (b *SubsonicApiBackend) InitConfig(config *config.ServiceConfig) error {
return nil
}
func (b *SubsonicApiBackend) ExportLoves(oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.Progress) {
defer close(results)
func (b *SubsonicApiBackend) ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan models.LovesResult, progress chan models.TransferProgress) {
err := b.client.Authenticate(b.password)
p := models.TransferProgress{
Export: &models.Progress{},
}
if err != nil {
progress <- models.Progress{}.Complete()
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
starred, err := b.client.GetStarred2(map[string]string{})
if err != nil {
progress <- models.Progress{}.Complete()
p.Export.Abort()
progress <- p
results <- models.LovesResult{Error: err}
return
}
loves := b.filterSongs(starred.Song, oldestTimestamp)
progress <- models.Progress{
Total: int64(loves.Len()),
}.Complete()
p.Export.Total = int64(len(loves))
p.Export.Complete()
progress <- p
results <- models.LovesResult{Items: loves}
}
@@ -116,7 +123,7 @@ func SongAsLove(song subsonic.Child, username string) models.Love {
AdditionalInfo: map[string]any{
"subsonic_id": song.ID,
},
Duration: time.Duration(song.Duration * int(time.Second)),
Duration: time.Duration(song.Duration) * time.Second,
},
}


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@@ -18,6 +18,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package cli
import (
"context"
"sync"
"time"
@@ -28,24 +29,105 @@ import (
"go.uploadedlobster.com/scotty/internal/models"
)
func progressBar(wg *sync.WaitGroup, exportProgress chan models.Progress, importProgress chan models.Progress) *mpb.Progress {
p := mpb.New(
type progressBarUpdater struct {
wg *sync.WaitGroup
progress *mpb.Progress
exportBar *mpb.Bar
importBar *mpb.Bar
updateChan chan models.TransferProgress
lastExportUpdate time.Time
totalItems int
importedItems int
}
func setupProgressBars(ctx context.Context, updateChan chan models.TransferProgress) progressBarUpdater {
wg := &sync.WaitGroup{}
p := mpb.NewWithContext(
ctx,
mpb.WithWaitGroup(wg),
mpb.WithOutput(color.Output),
// mpb.WithWidth(64),
mpb.WithAutoRefresh(),
)
exportBar := setupProgressBar(p, i18n.Tr("exporting"))
importBar := setupProgressBar(p, i18n.Tr("importing"))
go updateProgressBar(exportBar, wg, exportProgress)
go updateProgressBar(importBar, wg, importProgress)
u := progressBarUpdater{
wg: wg,
progress: p,
exportBar: initExportProgressBar(p, i18n.Tr("exporting")),
importBar: initImportProgressBar(p, i18n.Tr("importing")),
updateChan: updateChan,
}
return p
go u.update()
return u
}
func setupProgressBar(p *mpb.Progress, name string) *mpb.Bar {
func (u *progressBarUpdater) close() {
close(u.updateChan)
u.progress.Wait()
}
func (u *progressBarUpdater) update() {
u.wg.Add(1)
defer u.wg.Done()
u.lastExportUpdate = time.Now()
for progress := range u.updateChan {
if progress.Export != nil {
u.updateExportProgress(progress.Export)
}
if progress.Import != nil {
if int64(u.totalItems) > progress.Import.Total {
progress.Import.Total = int64(u.totalItems)
}
u.updateImportProgress(progress.Import)
}
}
}
func (u *progressBarUpdater) updateExportProgress(progress *models.Progress) {
bar := u.exportBar
if progress.TotalItems != u.totalItems {
u.totalItems = progress.TotalItems
u.importBar.SetTotal(int64(u.totalItems), false)
}
if progress.Aborted {
bar.Abort(false)
return
}
oldIterTime := u.lastExportUpdate
u.lastExportUpdate = time.Now()
elapsedTime := u.lastExportUpdate.Sub(oldIterTime)
bar.EwmaSetCurrent(progress.Elapsed, elapsedTime)
bar.SetTotal(progress.Total, progress.Completed)
}
func (u *progressBarUpdater) updateImportProgress(progress *models.Progress) {
bar := u.importBar
if progress.Aborted {
bar.Abort(false)
return
}
bar.SetCurrent(progress.Elapsed)
bar.SetTotal(progress.Total, progress.Completed)
}
func initExportProgressBar(p *mpb.Progress, name string) *mpb.Bar {
return initProgressBar(p, name,
decor.EwmaETA(decor.ET_STYLE_GO, 0, decor.WC{C: decor.DSyncWidth}))
}
func initImportProgressBar(p *mpb.Progress, name string) *mpb.Bar {
return initProgressBar(p, name, decor.Counters(0, "%d / %d"))
}
func initProgressBar(p *mpb.Progress, name string, progressDecorator decor.Decorator) *mpb.Bar {
green := color.New(color.FgGreen).SprintFunc()
red := color.New(color.FgHiRed, color.Bold).SprintFunc()
return p.New(0,
mpb.BarStyle(),
mpb.PrependDecorators(
@@ -58,23 +140,13 @@ func setupProgressBar(p *mpb.Progress, name string) *mpb.Bar {
),
mpb.AppendDecorators(
decor.OnComplete(
decor.EwmaETA(decor.ET_STYLE_GO, 0, decor.WC{C: decor.DSyncWidth}),
decor.OnAbort(
progressDecorator,
red(i18n.Tr("aborted")),
),
i18n.Tr("done"),
),
// decor.OnComplete(decor.Percentage(decor.WC{W: 5, C: decor.DSyncWidthR}), "done"),
decor.Name(" "),
),
)
}
func updateProgressBar(bar *mpb.Bar, wg *sync.WaitGroup, progressChan chan models.Progress) {
wg.Add(1)
defer wg.Done()
lastIterTime := time.Now()
for progress := range progressChan {
oldIterTime := lastIterTime
lastIterTime = time.Now()
bar.EwmaSetCurrent(progress.Elapsed, lastIterTime.Sub(oldIterTime))
bar.SetTotal(progress.Total, progress.Completed)
}
}


@@ -83,6 +83,12 @@ func PromptExtraOptions(config config.ServiceConfig) (config.ServiceConfig, erro
current, exists := config.ConfigValues[opt.Name]
if exists {
opt.Default = fmt.Sprintf("%v", current)
} else if opt.MigrateFrom != "" {
// If there is an old value to migrate from, try that
fallback, exists := config.ConfigValues[opt.MigrateFrom]
if exists {
opt.Default = fmt.Sprintf("%v", fallback)
}
}
val, err := Prompt(opt)


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Scotty is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
@@ -16,6 +16,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package cli
import (
"context"
"errors"
"fmt"
"strconv"
@@ -109,44 +110,58 @@ func (c *TransferCmd[E, I, R]) Transfer(exp backends.ExportProcessor[R], imp bac
}
printTimestamp("From timestamp: %v (%v)", timestamp)
// Use a context with cancel to abort the transfer
ctx, cancel := context.WithCancel(context.Background())
// Prepare progress bars
exportProgress := make(chan models.Progress)
importProgress := make(chan models.Progress)
var wg sync.WaitGroup
progress := progressBar(&wg, exportProgress, importProgress)
progressChan := make(chan models.TransferProgress)
progress := setupProgressBars(ctx, progressChan)
wg := &sync.WaitGroup{}
// Export from source
exportChan := make(chan R, 1000)
go exp.Process(timestamp, exportChan, exportProgress)
go exp.Process(ctx, wg, timestamp, exportChan, progressChan)
// Import into target
resultChan := make(chan models.ImportResult)
go imp.Process(exportChan, resultChan, importProgress)
go imp.Process(ctx, wg, exportChan, resultChan, progressChan)
result := <-resultChan
if timestamp.After(result.LastTimestamp) {
result.LastTimestamp = timestamp
// If the import has errored, the context can be cancelled immediately
if result.Error != nil {
cancel()
} else {
defer cancel()
}
// Wait for all goroutines to finish
wg.Wait()
progress.Wait()
progress.close()
// Update timestamp
err = c.updateTimestamp(&result, timestamp)
if err != nil {
return err
}
fmt.Println(i18n.Tr("Imported %v of %v %s into %v.",
result.ImportCount, result.TotalCount, c.entity, c.targetName))
if result.Error != nil {
printTimestamp("Import failed, last reported timestamp was %v (%s)", result.LastTimestamp)
return result.Error
}
fmt.Println(i18n.Tr("Imported %v of %v %s into %v.",
result.ImportCount, result.TotalCount, c.entity, c.targetName))
// Update timestamp
err = c.updateTimestamp(result, timestamp)
if err != nil {
return err
}
// Print errors
if len(result.ImportLog) > 0 {
fmt.Println()
fmt.Println(i18n.Tr("Import log:"))
for _, entry := range result.ImportLog {
if entry.Type != models.Output {
fmt.Println(i18n.Tr("%v: %v", entry.Type, entry.Message))
} else {
fmt.Println(entry.Message)
}
}
}
@@ -179,7 +194,7 @@ func (c *TransferCmd[E, I, R]) timestamp() (time.Time, error) {
return time.Time{}, errors.New(i18n.Tr("invalid timestamp string \"%v\"", flagValue))
}
func (c *TransferCmd[E, I, R]) updateTimestamp(result models.ImportResult, oldTimestamp time.Time) error {
func (c *TransferCmd[E, I, R]) updateTimestamp(result *models.ImportResult, oldTimestamp time.Time) error {
if oldTimestamp.After(result.LastTimestamp) {
result.LastTimestamp = oldTimestamp
}


@@ -19,7 +19,6 @@ import (
"errors"
"fmt"
"os"
"path"
"path/filepath"
"regexp"
"strings"
@@ -40,7 +39,7 @@ const (
func DefaultConfigDir() string {
configDir, err := os.UserConfigDir()
cobra.CheckErr(err)
return path.Join(configDir, version.AppName)
return filepath.Join(configDir, version.AppName)
}
// initConfig reads in config file and ENV variables if set.


@@ -0,0 +1,215 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
package listenbrainz
import (
"encoding/json"
"errors"
"io"
"iter"
"regexp"
"sort"
"strconv"
"time"
"go.uploadedlobster.com/scotty/internal/models"
"go.uploadedlobster.com/scotty/pkg/archive"
)
// Represents a ListenBrainz export archive.
//
// The export contains the user's listen history, favorite tracks and
// user information.
type ExportArchive struct {
backend archive.ArchiveReader
}
// Open a ListenBrainz export archive from a file path.
func OpenExportArchive(path string) (*ExportArchive, error) {
backend, err := archive.OpenArchive(path)
if err != nil {
return nil, err
}
return &ExportArchive{backend: backend}, nil
}
// Close the archive and release any resources.
func (a *ExportArchive) Close() error {
if a.backend == nil {
return nil
}
return a.backend.Close()
}
// Read the user information from the archive.
func (a *ExportArchive) UserInfo() (UserInfo, error) {
f, err := a.backend.Open("user.json")
if err != nil {
return UserInfo{}, err
}
defer f.Close()
userInfo := UserInfo{}
bytes, err := io.ReadAll(f)
if err != nil {
return userInfo, err
}
if err = json.Unmarshal(bytes, &userInfo); err != nil {
return userInfo, err
}
return userInfo, nil
}
func (a *ExportArchive) ListListenExports() ([]ListenExportFileInfo, error) {
re := regexp.MustCompile(`^listens/(\d{4})/(\d{1,2})\.jsonl$`)
result := make([]ListenExportFileInfo, 0)
files, err := a.backend.Glob("listens/*/*.jsonl")
if err != nil {
return nil, err
}
for _, file := range files {
match := re.FindStringSubmatch(file.Name)
if match == nil {
continue
}
year := match[1]
month := match[2]
times, err := getMonthTimeRange(year, month)
if err != nil {
return nil, err
}
info := ListenExportFileInfo{
Name: file.Name,
TimeRange: *times,
f: file.File,
}
result = append(result, info)
}
return result, nil
}
// Yields all listens from the archive that are newer than the given timestamp.
// The listens are yielded in ascending order of their listened_at timestamp.
func (a *ExportArchive) IterListens(minTimestamp time.Time) iter.Seq2[Listen, error] {
return func(yield func(Listen, error) bool) {
files, err := a.ListListenExports()
if err != nil {
yield(Listen{}, err)
return
}
sort.Slice(files, func(i, j int) bool {
return files[i].TimeRange.Start.Before(files[j].TimeRange.Start)
})
for _, file := range files {
if file.TimeRange.End.Before(minTimestamp) {
continue
}
f := models.JSONLFile[Listen]{File: file.f}
for l, err := range f.IterItems() {
if err != nil {
yield(Listen{}, err)
return
}
if !time.Unix(l.ListenedAt, 0).After(minTimestamp) {
continue
}
if !yield(l, nil) {
break
}
}
}
}
}
// Yields all feedback entries from the archive that are newer than the given timestamp.
// The entries are yielded in ascending order of their Created timestamp.
func (a *ExportArchive) IterFeedback(minTimestamp time.Time) iter.Seq2[Feedback, error] {
return func(yield func(Feedback, error) bool) {
files, err := a.backend.Glob("feedback.jsonl")
if err != nil {
yield(Feedback{}, err)
return
} else if len(files) == 0 {
yield(Feedback{}, errors.New("no feedback.jsonl file found in archive"))
return
}
j := models.JSONLFile[Feedback]{File: files[0].File}
for l, err := range j.IterItems() {
if err != nil {
yield(Feedback{}, err)
return
}
if !time.Unix(l.Created, 0).After(minTimestamp) {
continue
}
if !yield(l, nil) {
break
}
}
}
}
type UserInfo struct {
ID string `json:"user_id"`
Name string `json:"username"`
}
type timeRange struct {
Start time.Time
End time.Time
}
type ListenExportFileInfo struct {
Name string
TimeRange timeRange
f archive.OpenableFile
}
func getMonthTimeRange(year string, month string) (*timeRange, error) {
yearInt, err := strconv.Atoi(year)
if err != nil {
return nil, err
}
monthInt, err := strconv.Atoi(month)
if err != nil {
return nil, err
}
r := &timeRange{}
r.Start = time.Date(yearInt, time.Month(monthInt), 1, 0, 0, 0, 0, time.UTC)
// Get the end of the month
nextMonth := monthInt + 1
r.End = time.Date(
yearInt, time.Month(nextMonth), 1, 0, 0, 0, 0, time.UTC).Add(-time.Second)
return r, nil
}


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -22,12 +22,13 @@ THE SOFTWARE.
package listenbrainz
import (
"context"
"errors"
"strconv"
"time"
"github.com/go-resty/resty/v2"
"go.uploadedlobster.com/scotty/internal/version"
"go.uploadedlobster.com/mbtypes"
"go.uploadedlobster.com/scotty/pkg/ratelimit"
)
@@ -43,13 +44,13 @@
MaxResults int
}
func NewClient(token string) Client {
func NewClient(token string, userAgent string) Client {
client := resty.New()
client.SetBaseURL(listenBrainzBaseURL)
client.SetAuthScheme("Token")
client.SetAuthToken(token)
client.SetHeader("Accept", "application/json")
client.SetHeader("User-Agent", version.UserAgent())
client.SetHeader("User-Agent", userAgent)
// Handle rate limiting (see https://listenbrainz.readthedocs.io/en/latest/users/api/index.html#rate-limiting)
ratelimit.EnableHTTPHeaderRateLimit(client, "X-RateLimit-Reset-In")
@@ -60,10 +61,11 @@ func NewClient(token string) Client {
}
}
func (c Client) GetListens(user string, maxTime time.Time, minTime time.Time) (result GetListensResult, err error) {
func (c Client) GetListens(ctx context.Context, user string, maxTime time.Time, minTime time.Time) (result GetListensResult, err error) {
const path = "/user/{username}/listens"
errorResult := ErrorResult{}
response, err := c.HTTPClient.R().
SetContext(ctx).
SetPathParam("username", user).
SetQueryParams(map[string]string{
"max_ts": strconv.FormatInt(maxTime.Unix(), 10),
@@ -81,10 +83,11 @@ func (c Client) GetListens(user string, maxTime time.Time, minTime time.Time) (r
return
}
func (c Client) SubmitListens(listens ListenSubmission) (result StatusResult, err error) {
func (c Client) SubmitListens(ctx context.Context, listens ListenSubmission) (result StatusResult, err error) {
const path = "/submit-listens"
errorResult := ErrorResult{}
response, err := c.HTTPClient.R().
SetContext(ctx).
SetBody(listens).
SetResult(&result).
SetError(&errorResult).
@@ -97,10 +100,11 @@ func (c Client) SubmitListens(listens ListenSubmission) (result StatusResult, er
return
}
func (c Client) GetFeedback(user string, status int, offset int) (result GetFeedbackResult, err error) {
func (c Client) GetFeedback(ctx context.Context, user string, status int, offset int) (result GetFeedbackResult, err error) {
const path = "/feedback/user/{username}/get-feedback"
errorResult := ErrorResult{}
response, err := c.HTTPClient.R().
SetContext(ctx).
SetPathParam("username", user).
SetQueryParams(map[string]string{
"status": strconv.Itoa(status),
@@ -119,10 +123,11 @@ func (c Client) GetFeedback(user string, status int, offset int) (result GetFeed
return
}
func (c Client) SendFeedback(feedback Feedback) (result StatusResult, err error) {
func (c Client) SendFeedback(ctx context.Context, feedback Feedback) (result StatusResult, err error) {
const path = "/feedback/recording-feedback"
errorResult := ErrorResult{}
response, err := c.HTTPClient.R().
SetContext(ctx).
SetBody(feedback).
SetResult(&result).
SetError(&errorResult).
@@ -135,10 +140,11 @@ func (c Client) SendFeedback(feedback Feedback) (result StatusResult, err error)
return
}
func (c Client) Lookup(recordingName string, artistName string) (result LookupResult, err error) {
func (c Client) Lookup(ctx context.Context, recordingName string, artistName string) (result LookupResult, err error) {
const path = "/metadata/lookup"
errorResult := ErrorResult{}
response, err := c.HTTPClient.R().
SetContext(ctx).
SetQueryParams(map[string]string{
"recording_name": recordingName,
"artist_name": artistName,
@ -153,3 +159,24 @@ func (c Client) Lookup(recordingName string, artistName string) (result LookupRe
}
return
}
func (c Client) MetadataRecordings(ctx context.Context, mbids []mbtypes.MBID) (result RecordingMetadataResult, err error) {
const path = "/metadata/recording/"
errorResult := ErrorResult{}
body := RecordingMetadataRequest{
RecordingMBIDs: mbids,
Includes: "artist release",
}
response, err := c.HTTPClient.R().
SetContext(ctx).
SetBody(body).
SetResult(&result).
SetError(&errorResult).
Post(path)
if err != nil {
return
}
if !response.IsSuccess() {
err = errors.New(errorResult.Error)
return
}
return
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -22,6 +22,7 @@ THE SOFTWARE.
package listenbrainz_test
import (
"context"
"net/http"
"testing"
"time"
@ -30,12 +31,12 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uploadedlobster.com/mbtypes"
"go.uploadedlobster.com/scotty/internal/backends/listenbrainz"
"go.uploadedlobster.com/scotty/internal/listenbrainz"
)
func TestNewClient(t *testing.T) {
token := "foobar123"
client := listenbrainz.NewClient(token)
client := listenbrainz.NewClient(token, "test/1.0")
assert.Equal(t, token, client.HTTPClient.Token)
assert.Equal(t, listenbrainz.DefaultItemsPerGet, client.MaxResults)
}
@ -43,13 +44,15 @@ func TestNewClient(t *testing.T) {
func TestGetListens(t *testing.T) {
defer httpmock.DeactivateAndReset()
client := listenbrainz.NewClient("thetoken")
client := listenbrainz.NewClient("thetoken", "test/1.0")
client.MaxResults = 2
setupHTTPMock(t, client.HTTPClient.GetClient(),
"https://api.listenbrainz.org/1/user/outsidecontext/listens",
"testdata/listens.json")
result, err := client.GetListens("outsidecontext", time.Now(), time.Now().Add(-2*time.Hour))
ctx := context.Background()
result, err := client.GetListens(ctx, "outsidecontext",
time.Now(), time.Now().Add(-2*time.Hour))
require.NoError(t, err)
assert := assert.New(t)
@ -61,7 +64,7 @@ func TestGetListens(t *testing.T) {
}
func TestSubmitListens(t *testing.T) {
client := listenbrainz.NewClient("thetoken")
client := listenbrainz.NewClient("thetoken", "test/1.0")
httpmock.ActivateNonDefault(client.HTTPClient.GetClient())
responder, err := httpmock.NewJsonResponder(200, listenbrainz.StatusResult{
@ -92,8 +95,8 @@ func TestSubmitListens(t *testing.T) {
},
},
}
result, err := client.SubmitListens(listens)
require.NoError(t, err)
ctx := context.Background()
result, err := client.SubmitListens(ctx, listens)
assert.Equal(t, "ok", result.Status)
}
@ -101,13 +104,14 @@ func TestSubmitListens(t *testing.T) {
func TestGetFeedback(t *testing.T) {
defer httpmock.DeactivateAndReset()
client := listenbrainz.NewClient("thetoken")
client := listenbrainz.NewClient("thetoken", "test/1.0")
client.MaxResults = 2
setupHTTPMock(t, client.HTTPClient.GetClient(),
"https://api.listenbrainz.org/1/feedback/user/outsidecontext/get-feedback",
"testdata/feedback.json")
result, err := client.GetFeedback("outsidecontext", 1, 3)
ctx := context.Background()
result, err := client.GetFeedback(ctx, "outsidecontext", 1, 0)
require.NoError(t, err)
assert := assert.New(t)
@ -119,7 +123,7 @@ func TestGetFeedback(t *testing.T) {
}
func TestSendFeedback(t *testing.T) {
client := listenbrainz.NewClient("thetoken")
client := listenbrainz.NewClient("thetoken", "test/1.0")
httpmock.ActivateNonDefault(client.HTTPClient.GetClient())
responder, err := httpmock.NewJsonResponder(200, listenbrainz.StatusResult{
@ -135,7 +139,8 @@ func TestSendFeedback(t *testing.T) {
RecordingMBID: "c0a1fc94-5f04-4a5f-bc09-e5de0c49cd12",
Score: 1,
}
result, err := client.SendFeedback(feedback)
ctx := context.Background()
result, err := client.SendFeedback(ctx, feedback)
require.NoError(t, err)
assert.Equal(t, "ok", result.Status)
@ -144,12 +149,13 @@ func TestSendFeedback(t *testing.T) {
func TestLookup(t *testing.T) {
defer httpmock.DeactivateAndReset()
client := listenbrainz.NewClient("thetoken")
client := listenbrainz.NewClient("thetoken", "test/1.0")
setupHTTPMock(t, client.HTTPClient.GetClient(),
"https://api.listenbrainz.org/1/metadata/lookup",
"testdata/lookup.json")
result, err := client.Lookup("Paradise Lost", "Say Just Words")
ctx := context.Background()
result, err := client.Lookup(ctx, "Paradise Lost", "Say Just Words")
require.NoError(t, err)
assert := assert.New(t)


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -55,7 +55,7 @@ type ListenSubmission struct {
}
type Listen struct {
InsertedAt int64 `json:"inserted_at,omitempty"`
InsertedAt float64 `json:"inserted_at,omitempty"`
ListenedAt int64 `json:"listened_at"`
RecordingMSID string `json:"recording_msid,omitempty"`
UserName string `json:"user_name,omitempty"`
@ -66,21 +66,24 @@ type Track struct {
TrackName string `json:"track_name,omitempty"`
ArtistName string `json:"artist_name,omitempty"`
ReleaseName string `json:"release_name,omitempty"`
RecordingMSID string `json:"recording_msid,omitempty"`
AdditionalInfo map[string]any `json:"additional_info,omitempty"`
MBIDMapping *MBIDMapping `json:"mbid_mapping,omitempty"`
}
type MBIDMapping struct {
RecordingName string `json:"recording_name,omitempty"`
RecordingMBID mbtypes.MBID `json:"recording_mbid,omitempty"`
ReleaseMBID mbtypes.MBID `json:"release_mbid,omitempty"`
ArtistMBIDs []mbtypes.MBID `json:"artist_mbids,omitempty"`
Artists []Artist `json:"artists,omitempty"`
RecordingMBID mbtypes.MBID `json:"recording_mbid,omitempty"`
RecordingName string `json:"recording_name,omitempty"`
ReleaseMBID mbtypes.MBID `json:"release_mbid,omitempty"`
CAAID int `json:"caa_id,omitempty"`
CAAReleaseMBID mbtypes.MBID `json:"caa_release_mbid,omitempty"`
}
type Artist struct {
ArtistCreditName string `json:"artist_credit_name,omitempty"`
ArtistMBID string `json:"artist_mbid,omitempty"`
ArtistMBID mbtypes.MBID `json:"artist_mbid,omitempty"`
JoinPhrase string `json:"join_phrase,omitempty"`
}
@ -109,6 +112,44 @@ type LookupResult struct {
ArtistMBIDs []mbtypes.MBID `json:"artist_mbids"`
}
type RecordingMetadataRequest struct {
RecordingMBIDs []mbtypes.MBID `json:"recording_mbids"`
Includes string `json:"inc,omitempty"`
}
// Result for a recording metadata lookup
type RecordingMetadataResult map[mbtypes.MBID]RecordingMetadata
type RecordingMetadata struct {
Artist struct {
Name string `json:"name"`
ArtistCreditID int `json:"artist_credit_id"`
Artists []struct {
Name string `json:"name"`
Area string `json:"area"`
ArtistMBID mbtypes.MBID `json:"artist_mbid"`
JoinPhrase string `json:"join_phrase"`
BeginYear int `json:"begin_year"`
Type string `json:"type"`
// TODO rels
} `json:"artists"`
} `json:"artist"`
Recording struct {
Name string `json:"name"`
Length int `json:"length"`
// TODO rels
} `json:"recording"`
Release struct {
Name string `json:"name"`
AlbumArtistName string `json:"album_artist_name"`
Year int `json:"year"`
MBID mbtypes.MBID `json:"mbid"`
ReleaseGroupMBID mbtypes.MBID `json:"release_group_mbid"`
CAAID int `json:"caa_id"`
CAAReleaseMBID mbtypes.MBID `json:"caa_release_mbid"`
} `json:"release"`
}
type StatusResult struct {
Status string `json:"status"`
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -29,7 +29,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uploadedlobster.com/mbtypes"
"go.uploadedlobster.com/scotty/internal/backends/listenbrainz"
"go.uploadedlobster.com/scotty/internal/listenbrainz"
)
func TestTrackDurationMillisecondsInt(t *testing.T) {


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
@ -17,6 +17,7 @@ Scotty. If not, see <https://www.gnu.org/licenses/>.
package models
import (
"context"
"time"
// "go.uploadedlobster.com/scotty/internal/auth"
@ -34,6 +35,9 @@ type Backend interface {
// Return configuration options
Options() []BackendOption
// Free all resources of the backend
Close()
}
type ImportBackend interface {
@ -45,7 +49,7 @@ type ImportBackend interface {
// The implementation can perform all steps here to finalize the
// export/import and free used resources.
FinishImport() error
FinishImport(result *ImportResult) error
}
// Must be implemented by services supporting the export of listens.
@ -55,7 +59,7 @@ type ListensExport interface {
// Returns a list of all listens newer than oldestTimestamp.
// The returned list of listens is supposed to be ordered by the
// Listen.ListenedAt timestamp, with the oldest entry first.
ExportListens(oldestTimestamp time.Time, results chan ListensResult, progress chan Progress)
ExportListens(ctx context.Context, oldestTimestamp time.Time, results chan ListensResult, progress chan TransferProgress)
}
// Must be implemented by services supporting the import of listens.
@ -63,7 +67,7 @@ type ListensImport interface {
ImportBackend
// Imports the given list of listens.
ImportListens(export ListensResult, importResult ImportResult, progress chan Progress) (ImportResult, error)
ImportListens(ctx context.Context, export ListensResult, importResult ImportResult, progress chan TransferProgress) (ImportResult, error)
}
// Must be implemented by services supporting the export of loves.
@ -73,7 +77,7 @@ type LovesExport interface {
// Returns a list of all loves newer than oldestTimestamp.
// The returned list of loves is supposed to be ordered by the
// Love.Created timestamp, with the oldest entry first.
ExportLoves(oldestTimestamp time.Time, results chan LovesResult, progress chan Progress)
ExportLoves(ctx context.Context, oldestTimestamp time.Time, results chan LovesResult, progress chan TransferProgress)
}
// Must be implemented by services supporting the import of loves.
@ -81,5 +85,5 @@ type LovesImport interface {
ImportBackend
// Imports the given list of loves.
ImportLoves(export LovesResult, importResult ImportResult, progress chan Progress) (ImportResult, error)
ImportLoves(ctx context.Context, export LovesResult, importResult ImportResult, progress chan TransferProgress) (ImportResult, error)
}
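The new `Close` method becomes part of the `Backend` contract, so every implementation must provide it. A minimal sketch of what that looks like, using a local interface trimmed to the `Close` method (the real interface in the models package also declares `Options` and more); backends without resources to release can satisfy it with a no-op, while others (such as the musicbrainzws2 client mentioned in the changelog) free connections here:

```go
package main

import "fmt"

// Trimmed local mirror of the Backend interface.
type Backend interface {
	Close()
}

// dummyBackend is a hypothetical implementation that records whether its
// resources were released.
type dummyBackend struct{ closed bool }

func (b *dummyBackend) Close() { b.closed = true }

func main() {
	var b Backend = &dummyBackend{}
	// The caller is responsible for releasing backend resources when done.
	defer b.Close()
	fmt.Println("backend in use")
}
```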

internal/models/jsonl.go (new file, 65 lines)

@ -0,0 +1,65 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
This file is part of Scotty.
Scotty is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later version.
Scotty is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
Scotty. If not, see <https://www.gnu.org/licenses/>.
*/
package models
import (
"errors"
"iter"
"github.com/simonfrey/jsonl"
"go.uploadedlobster.com/scotty/pkg/archive"
)
type JSONLFile[T any] struct {
File archive.OpenableFile
}
func (f *JSONLFile[T]) openReader() (*jsonl.Reader, error) {
if f.File == nil {
return nil, errors.New("file not set")
}
fio, err := f.File.Open()
if err != nil {
return nil, err
}
reader := jsonl.NewReader(fio)
return &reader, nil
}
func (f *JSONLFile[T]) IterItems() iter.Seq2[T, error] {
return func(yield func(T, error) bool) {
reader, err := f.openReader()
if err != nil {
var listen T
yield(listen, err)
return
}
defer reader.Close()
for {
var listen T
err := reader.ReadSingleLine(&listen)
if err != nil {
break
}
if !yield(listen, nil) {
break
}
}
}
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -22,6 +22,7 @@ THE SOFTWARE.
package models
import (
"iter"
"strings"
"time"
@ -168,6 +169,7 @@ type LovesResult ExportResult[LovesList]
type LogEntryType string
const (
Output LogEntryType = ""
Info LogEntryType = "Info"
Warning LogEntryType = "Warning"
Error LogEntryType = "Error"
@ -195,11 +197,21 @@ func (i *ImportResult) UpdateTimestamp(newTime time.Time) {
}
}
func (i *ImportResult) Update(from ImportResult) {
func (i *ImportResult) Update(from *ImportResult) {
if i != from {
i.TotalCount = from.TotalCount
i.ImportCount = from.ImportCount
i.UpdateTimestamp(from.LastTimestamp)
i.ImportLog = append(i.ImportLog, from.ImportLog...)
}
}
func (i *ImportResult) Copy() ImportResult {
return ImportResult{
TotalCount: i.TotalCount,
ImportCount: i.ImportCount,
LastTimestamp: i.LastTimestamp,
}
}
func (i *ImportResult) Log(t LogEntryType, msg string) {
@ -209,10 +221,25 @@ func (i *ImportResult) Log(t LogEntryType, msg string) {
})
}
type TransferProgress struct {
Export *Progress
Import *Progress
}
func (p TransferProgress) FromImportResult(result ImportResult, completed bool) TransferProgress {
importProgress := Progress{
Completed: completed,
}.FromImportResult(result)
p.Import = &importProgress
return p
}
type Progress struct {
TotalItems int
Total int64
Elapsed int64
Completed bool
Aborted bool
}
func (p Progress) FromImportResult(result ImportResult) Progress {
@ -221,8 +248,48 @@ func (p Progress) FromImportResult(result ImportResult) Progress {
return p
}
func (p Progress) Complete() Progress {
func (p *Progress) Complete() {
p.Elapsed = p.Total
p.Completed = true
return p
}
func (p *Progress) Abort() {
p.Aborted = true
}
func IterExportProgress[T any](
items []T, t *TransferProgress, c chan TransferProgress,
) iter.Seq2[int, T] {
return iterProgress(items, t, t.Export, c, true)
}
func IterImportProgress[T any](
items []T, t *TransferProgress, c chan TransferProgress,
) iter.Seq2[int, T] {
return iterProgress(items, t, t.Import, c, false)
}
func iterProgress[T any](
items []T, t *TransferProgress,
p *Progress, c chan TransferProgress,
autocomplete bool,
) iter.Seq2[int, T] {
// Report progress in 1% steps
steps := max(len(items)/100, 1)
return func(yield func(int, T) bool) {
for i, item := range items {
if !yield(i, item) {
return
}
p.Elapsed++
if i%steps == 0 {
c <- *t
}
}
if autocomplete {
p.Complete()
c <- *t
}
}
}


@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -138,13 +138,31 @@ func TestImportResultUpdate(t *testing.T) {
LastTimestamp: time.Now().Add(1 * time.Hour),
ImportLog: []models.LogEntry{logEntry2},
}
result.Update(newResult)
result.Update(&newResult)
assert.Equal(t, 120, result.TotalCount)
assert.Equal(t, 50, result.ImportCount)
assert.Equal(t, newResult.LastTimestamp, result.LastTimestamp)
assert.Equal(t, []models.LogEntry{logEntry1, logEntry2}, result.ImportLog)
}
func TestImportResultCopy(t *testing.T) {
logEntry := models.LogEntry{
Type: models.Warning,
Message: "foo",
}
result := models.ImportResult{
TotalCount: 100,
ImportCount: 20,
LastTimestamp: time.Now(),
ImportLog: []models.LogEntry{logEntry},
}
copy := result.Copy()
assert.Equal(t, result.TotalCount, copy.TotalCount)
assert.Equal(t, result.ImportCount, copy.ImportCount)
assert.Equal(t, result.LastTimestamp, copy.LastTimestamp)
assert.Empty(t, copy.ImportLog)
}
func TestImportResultLog(t *testing.T) {
result := models.ImportResult{}
result.Log(models.Warning, "foo")


@ -30,4 +30,5 @@ type BackendOption struct {
Type OptionType
Default string
Validate func(string) error
MigrateFrom string
}


@ -42,12 +42,12 @@ var messageKeyToIndex = map[string]int{
"\tbackend: %v": 11,
"\texport: %s": 0,
"\timport: %s\n": 1,
"%v: %v": 48,
"%v: %v": 49,
"Aborted": 8,
"Access token": 19,
"Access token received, you can use %v now.\n": 34,
"Append to file": 21,
"Backend": 42,
"Backend": 43,
"Check for duplicate listens on import (slower)": 24,
"Client ID": 15,
"Client secret": 16,
@ -57,45 +57,46 @@ var messageKeyToIndex = map[string]int{
"Error: OAuth state mismatch": 33,
"Failed reading config: %v": 2,
"File path": 20,
"From timestamp: %v (%v)": 44,
"From timestamp: %v (%v)": 45,
"Ignore listens in incognito mode": 30,
"Ignore skipped listens": 27,
"Ignored duplicate listen %v: \"%v\" by %v (%v)": 25,
"Import failed, last reported timestamp was %v (%s)": 45,
"Import log:": 47,
"Import failed, last reported timestamp was %v (%s)": 47,
"Import log:": 48,
"Imported %v of %v %s into %v.": 46,
"Latest timestamp: %v (%v)": 50,
"Latest timestamp: %v (%v)": 51,
"Minimum playback duration for skipped tracks (seconds)": 31,
"No": 39,
"No": 40,
"Playlist title": 22,
"Saved service %v using backend %v": 5,
"Server URL": 17,
"Service": 41,
"Service": 42,
"Service \"%v\" deleted\n": 9,
"Service name": 3,
"Specify a time zone for the listen timestamps": 28,
"The backend %v requires authentication. Authenticate now?": 6,
"Token received, you can close this window now.": 12,
"Transferring %s from %s to %s…": 43,
"Transferring %s from %s to %s…": 44,
"Unique playlist identifier": 23,
"Updated service %v using backend %v\n": 10,
"User name": 18,
"Visit the URL for authorization: %v": 32,
"Yes": 38,
"Yes": 39,
"a service with this name already exists": 4,
"aborted": 37,
"backend %s does not implement %s": 13,
"done": 37,
"done": 38,
"exporting": 35,
"importing": 36,
"invalid timestamp string \"%v\"": 49,
"key must only consist of A-Za-z0-9_-": 52,
"no configuration file defined, cannot write config": 51,
"no existing service configurations": 40,
"no service configuration \"%v\"": 53,
"invalid timestamp string \"%v\"": 50,
"key must only consist of A-Za-z0-9_-": 53,
"no configuration file defined, cannot write config": 52,
"no existing service configurations": 41,
"no service configuration \"%v\"": 54,
"unknown backend \"%s\"": 14,
}
var deIndex = []uint32{ // 55 elements
var deIndex = []uint32{ // 56 elements
// Entry 0 - 1F
0x00000000, 0x00000013, 0x00000027, 0x00000052,
0x0000005e, 0x0000008d, 0x000000bd, 0x00000104,
@ -107,14 +108,14 @@ var deIndex = []uint32{ // 55 elements
0x0000037e, 0x000003a4, 0x000003b4, 0x000003da,
// Entry 20 - 3F
0x00000418, 0x00000443, 0x0000046d, 0x000004ad,
0x000004b8, 0x000004c3, 0x000004ca, 0x000004cd,
0x000004d2, 0x000004fb, 0x00000503, 0x0000050b,
0x00000534, 0x00000552, 0x0000058f, 0x000005ba,
0x000005c5, 0x000005d2, 0x000005f6, 0x00000619,
0x0000066a, 0x000006a1, 0x000006c8,
} // Size: 244 bytes
0x000004b8, 0x000004c3, 0x000004cf, 0x000004d6,
0x000004d9, 0x000004de, 0x00000507, 0x0000050f,
0x00000517, 0x00000540, 0x0000055e, 0x00000589,
0x000005c6, 0x000005d1, 0x000005de, 0x00000602,
0x00000625, 0x00000676, 0x000006ad, 0x000006d4,
} // Size: 248 bytes
const deData string = "" + // Size: 1736 bytes
const deData string = "" + // Size: 1748 bytes
"\x04\x01\x09\x00\x0e\x02Export: %[1]s\x04\x01\x09\x01\x0a\x0e\x02Import:" +
" %[1]s\x02Fehler beim Lesen der Konfiguration: %[1]v\x02Servicename\x02e" +
"in Service mit diesem Namen existiert bereits\x02Service %[1]v mit dem B" +
@ -134,17 +135,17 @@ const deData string = "" + // Size: 1736 bytes
"inimale Wiedergabedauer für übersprungene Titel (Sekunden)\x02Zur Anmeld" +
"ung folgende URL aufrufen: %[1]v\x02Fehler: OAuth-State stimmt nicht übe" +
"rein\x04\x00\x01\x0a;\x02Zugriffstoken erhalten, %[1]v kann jetzt verwen" +
"det werden.\x02exportiere\x02importiere\x02fertig\x02Ja\x02Nein\x02keine" +
" bestehenden Servicekonfigurationen\x02Service\x02Backend\x02Übertrage %" +
"[1]s von %[2]s nach %[3]s…\x02Ab Zeitstempel: %[1]v (%[2]v)\x02Import fe" +
"hlgeschlagen, letzter Zeitstempel war %[1]v (%[2]s)\x02%[1]v von %[2]v %" +
"[3]s in %[4]v importiert.\x02Importlog:\x02%[1]v: %[2]v\x02ungültiger Ze" +
"itstempel „%[1]v“\x02Letzter Zeitstempel: %[1]v (%[2]v)\x02keine Konfigu" +
"rationsdatei definiert, Konfiguration kann nicht geschrieben werden\x02S" +
"chlüssel darf nur die Zeichen A-Za-z0-9_- beinhalten\x02keine Servicekon" +
"figuration „%[1]v“"
"det werden.\x02exportiere\x02importiere\x02abgebrochen\x02fertig\x02Ja" +
"\x02Nein\x02keine bestehenden Servicekonfigurationen\x02Service\x02Backe" +
"nd\x02Übertrage %[1]s von %[2]s nach %[3]s…\x02Ab Zeitstempel: %[1]v (%[" +
"2]v)\x02%[1]v von %[2]v %[3]s in %[4]v importiert.\x02Import fehlgeschla" +
"gen, letzter Zeitstempel war %[1]v (%[2]s)\x02Importlog:\x02%[1]v: %[2]v" +
"\x02ungültiger Zeitstempel „%[1]v“\x02Letzter Zeitstempel: %[1]v (%[2]v)" +
"\x02keine Konfigurationsdatei definiert, Konfiguration kann nicht geschr" +
"ieben werden\x02Schlüssel darf nur die Zeichen A-Za-z0-9_- beinhalten" +
"\x02keine Servicekonfiguration „%[1]v“"
var enIndex = []uint32{ // 55 elements
var enIndex = []uint32{ // 56 elements
// Entry 0 - 1F
0x00000000, 0x00000013, 0x00000027, 0x00000044,
0x00000051, 0x00000079, 0x000000a1, 0x000000de,
@ -156,14 +157,14 @@ var enIndex = []uint32{ // 55 elements
0x00000307, 0x00000335, 0x00000344, 0x00000365,
// Entry 20 - 3F
0x0000039c, 0x000003c3, 0x000003df, 0x00000412,
0x0000041c, 0x00000426, 0x0000042b, 0x0000042f,
0x00000432, 0x00000455, 0x0000045d, 0x00000465,
0x0000048f, 0x000004ad, 0x000004e6, 0x00000510,
0x0000051c, 0x00000529, 0x0000054a, 0x0000056a,
0x0000059d, 0x000005c2, 0x000005e3,
} // Size: 244 bytes
0x0000041c, 0x00000426, 0x0000042e, 0x00000433,
0x00000437, 0x0000043a, 0x0000045d, 0x00000465,
0x0000046d, 0x00000497, 0x000004b5, 0x000004df,
0x00000518, 0x00000524, 0x00000531, 0x00000552,
0x00000572, 0x000005a5, 0x000005ca, 0x000005eb,
} // Size: 248 bytes
const enData string = "" + // Size: 1507 bytes
const enData string = "" + // Size: 1515 bytes
"\x04\x01\x09\x00\x0e\x02export: %[1]s\x04\x01\x09\x01\x0a\x0e\x02import:" +
" %[1]s\x02Failed reading config: %[1]v\x02Service name\x02a service with" +
" this name already exists\x02Saved service %[1]v using backend %[2]v\x02" +
@ -181,13 +182,14 @@ const enData string = "" + // Size: 1507 bytes
"mps\x02Directory path\x02Ignore listens in incognito mode\x02Minimum pla" +
"yback duration for skipped tracks (seconds)\x02Visit the URL for authori" +
"zation: %[1]v\x02Error: OAuth state mismatch\x04\x00\x01\x0a.\x02Access " +
"token received, you can use %[1]v now.\x02exporting\x02importing\x02done" +
"\x02Yes\x02No\x02no existing service configurations\x02Service\x02Backen" +
"d\x02Transferring %[1]s from %[2]s to %[3]s…\x02From timestamp: %[1]v (%" +
"[2]v)\x02Import failed, last reported timestamp was %[1]v (%[2]s)\x02Imp" +
"orted %[1]v of %[2]v %[3]s into %[4]v.\x02Import log:\x02%[1]v: %[2]v" +
"\x02invalid timestamp string \x22%[1]v\x22\x02Latest timestamp: %[1]v (%" +
"[2]v)\x02no configuration file defined, cannot write config\x02key must " +
"only consist of A-Za-z0-9_-\x02no service configuration \x22%[1]v\x22"
"token received, you can use %[1]v now.\x02exporting\x02importing\x02abor" +
"ted\x02done\x02Yes\x02No\x02no existing service configurations\x02Servic" +
"e\x02Backend\x02Transferring %[1]s from %[2]s to %[3]s…\x02From timestam" +
"p: %[1]v (%[2]v)\x02Imported %[1]v of %[2]v %[3]s into %[4]v.\x02Import " +
"failed, last reported timestamp was %[1]v (%[2]s)\x02Import log:\x02%[1]" +
"v: %[2]v\x02invalid timestamp string \x22%[1]v\x22\x02Latest timestamp: " +
"%[1]v (%[2]v)\x02no configuration file defined, cannot write config\x02k" +
"ey must only consist of A-Za-z0-9_-\x02no service configuration \x22%[1]" +
"v\x22"
// Total table size 3731 bytes (3KiB); checksum: F7951710
// Total table size 3759 bytes (3KiB); checksum: 7B4CF967


@ -368,21 +368,23 @@
"id": "exporting",
"message": "exporting",
"translatorComment": "Copied from source.",
"fuzzy": true,
"translation": "exportiere"
},
{
"id": "importing",
"message": "importing",
"translatorComment": "Copied from source.",
"fuzzy": true,
"translation": "importiere"
},
{
"id": "aborted",
"message": "aborted",
"translation": "abgebrochen"
},
{
"id": "done",
"message": "done",
"translatorComment": "Copied from source.",
"fuzzy": true,
"translation": "fertig"
},
{
@ -462,27 +464,6 @@
}
]
},
{
"id": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"message": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"translation": "Import fehlgeschlagen, letzter Zeitstempel war {Arg_1} ({Arg_2})",
"placeholders": [
{
"id": "Arg_1",
"string": "%[1]v",
"type": "",
"underlyingType": "interface{}",
"argNum": 1
},
{
"id": "Arg_2",
"string": "%[2]s",
"type": "",
"underlyingType": "string",
"argNum": 2
}
]
},
{
"id": "Imported {ImportCount} of {TotalCount} {Entity} into {TargetName}.",
"message": "Imported {ImportCount} of {TotalCount} {Entity} into {TargetName}.",
@ -522,6 +503,27 @@
}
]
},
{
"id": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"message": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"translation": "Import fehlgeschlagen, letzter Zeitstempel war {Arg_1} ({Arg_2})",
"placeholders": [
{
"id": "Arg_1",
"string": "%[1]v",
"type": "",
"underlyingType": "interface{}",
"argNum": 1
},
{
"id": "Arg_2",
"string": "%[2]s",
"type": "",
"underlyingType": "string",
"argNum": 2
}
]
},
{
"id": "Import log:",
"message": "Import log:",


@ -368,22 +368,24 @@
"id": "exporting",
"message": "exporting",
"translation": "exportiere",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "importing",
"message": "importing",
"translation": "importiere",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "aborted",
"message": "aborted",
"translation": "abgebrochen"
},
{
"id": "done",
"message": "done",
"translation": "fertig",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Yes",
@ -462,27 +464,6 @@
}
]
},
{
"id": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"message": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"translation": "Import fehlgeschlagen, letzter Zeitstempel war {Arg_1} ({Arg_2})",
"placeholders": [
{
"id": "Arg_1",
"string": "%[1]v",
"type": "",
"underlyingType": "interface{}",
"argNum": 1
},
{
"id": "Arg_2",
"string": "%[2]s",
"type": "",
"underlyingType": "string",
"argNum": 2
}
]
},
{
"id": "Imported {ImportCount} of {TotalCount} {Entity} into {TargetName}.",
"message": "Imported {ImportCount} of {TotalCount} {Entity} into {TargetName}.",
@ -522,6 +503,27 @@
}
]
},
{
"id": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"message": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"translation": "Import fehlgeschlagen, letzter Zeitstempel war {Arg_1} ({Arg_2})",
"placeholders": [
{
"id": "Arg_1",
"string": "%[1]v",
"type": "",
"underlyingType": "interface{}",
"argNum": 1
},
{
"id": "Arg_2",
"string": "%[2]s",
"type": "",
"underlyingType": "string",
"argNum": 2
}
]
},
{
"id": "Import log:",
"message": "Import log:",


@ -15,8 +15,7 @@
"argNum": 1,
"expr": "strings.Join(info.ExportCapabilities, \", \")"
}
],
"fuzzy": true
]
},
{
"id": "import: {ImportCapabilities__}",
@ -32,8 +31,7 @@
"argNum": 1,
"expr": "strings.Join(info.ImportCapabilities, \", \")"
}
],
"fuzzy": true
]
},
{
"id": "Failed reading config: {Err}",
@ -49,22 +47,19 @@
"argNum": 1,
"expr": "err"
}
],
"fuzzy": true
]
},
{
"id": "Service name",
"message": "Service name",
"translation": "Service name",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "a service with this name already exists",
"message": "a service with this name already exists",
"translation": "a service with this name already exists",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Saved service {Name} using backend {Backend}",
@ -88,8 +83,7 @@
"argNum": 2,
"expr": "service.Backend"
}
],
"fuzzy": true
]
},
{
"id": "The backend {Backend} requires authentication. Authenticate now?",
@ -105,8 +99,7 @@
"argNum": 1,
"expr": "service.Backend"
}
],
"fuzzy": true
]
},
{
"id": "Delete the service configuration \"{Service}\"?",
@ -122,15 +115,13 @@
"argNum": 1,
"expr": "service"
}
],
"fuzzy": true
]
},
{
"id": "Aborted",
"message": "Aborted",
"translation": "Aborted",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Service \"{Name}\" deleted",
@ -146,8 +137,7 @@
"argNum": 1,
"expr": "service.Name"
}
],
"fuzzy": true
]
},
{
"id": "Updated service {Name} using backend {Backend}",
@ -171,8 +161,7 @@
"argNum": 2,
"expr": "service.Backend"
}
],
"fuzzy": true
]
},
{
"id": "backend: {Backend}",
@ -188,15 +177,13 @@
"argNum": 1,
"expr": "s.Backend"
}
],
"fuzzy": true
]
},
{
"id": "Token received, you can close this window now.",
"message": "Token received, you can close this window now.",
"translation": "Token received, you can close this window now.",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "backend {Backend} does not implement {InterfaceName}",
@ -220,8 +207,7 @@
"argNum": 2,
"expr": "interfaceName"
}
],
"fuzzy": true
]
},
{
"id": "unknown backend \"{BackendName}\"",
@ -237,78 +223,67 @@
"argNum": 1,
"expr": "backendName"
}
],
"fuzzy": true
]
},
{
"id": "Client ID",
"message": "Client ID",
"translation": "Client ID",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Client secret",
"message": "Client secret",
"translation": "Client secret",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Server URL",
"message": "Server URL",
"translation": "Server URL",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "User name",
"message": "User name",
"translation": "User name",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Access token",
"message": "Access token",
"translation": "Access token",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "File path",
"message": "File path",
"translation": "File path",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Append to file",
"message": "Append to file",
"translation": "Append to file",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Playlist title",
"message": "Playlist title",
"translation": "Playlist title",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Unique playlist identifier",
"message": "Unique playlist identifier",
"translation": "Unique playlist identifier",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Check for duplicate listens on import (slower)",
"message": "Check for duplicate listens on import (slower)",
"translation": "Check for duplicate listens on import (slower)",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Ignored duplicate listen {ListenedAt}: \"{TrackName}\" by {ArtistName} ({RecordingMBID})",
@@ -348,50 +323,43 @@
"argNum": 4,
"expr": "l.RecordingMBID"
}
],
"fuzzy": true
]
},
{
"id": "Disable auto correction of submitted listens",
"message": "Disable auto correction of submitted listens",
"translation": "Disable auto correction of submitted listens",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Ignore skipped listens",
"message": "Ignore skipped listens",
"translation": "Ignore skipped listens",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Specify a time zone for the listen timestamps",
"message": "Specify a time zone for the listen timestamps",
"translation": "Specify a time zone for the listen timestamps",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Directory path",
"message": "Directory path",
"translation": "Directory path",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Ignore listens in incognito mode",
"message": "Ignore listens in incognito mode",
"translation": "Ignore listens in incognito mode",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Minimum playback duration for skipped tracks (seconds)",
"message": "Minimum playback duration for skipped tracks (seconds)",
"translation": "Minimum playback duration for skipped tracks (seconds)",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Visit the URL for authorization: {URL}",
@@ -407,15 +375,13 @@
"argNum": 1,
"expr": "authURL.URL"
}
],
"fuzzy": true
]
},
{
"id": "Error: OAuth state mismatch",
"message": "Error: OAuth state mismatch",
"translation": "Error: OAuth state mismatch",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Access token received, you can use {Name} now.",
@@ -431,64 +397,55 @@
"argNum": 1,
"expr": "service.Name"
}
],
"fuzzy": true
]
},
{
"id": "exporting",
"message": "exporting",
"translation": "exporting",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "importing",
"message": "importing",
"translation": "importing",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "done",
"message": "done",
"translation": "done",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Yes",
"message": "Yes",
"translation": "Yes",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "No",
"message": "No",
"translation": "No",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "no existing service configurations",
"message": "no existing service configurations",
"translation": "no existing service configurations",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Service",
"message": "Service",
"translation": "Service",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Backend",
"message": "Backend",
"translation": "Backend",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Transferring {Entity} from {SourceName} to {TargetName}…",
@@ -520,8 +477,7 @@
"argNum": 3,
"expr": "c.targetName"
}
],
"fuzzy": true
]
},
{
"id": "From timestamp: {Arg_1} ({Arg_2})",
@@ -543,8 +499,7 @@
"underlyingType": "interface{}",
"argNum": 2
}
],
"fuzzy": true
]
},
{
"id": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
@@ -566,8 +521,7 @@
"underlyingType": "string",
"argNum": 2
}
],
"fuzzy": true
]
},
{
"id": "Imported {ImportCount} of {TotalCount} {Entity} into {TargetName}.",
@@ -607,15 +561,13 @@
"argNum": 4,
"expr": "c.targetName"
}
],
"fuzzy": true
]
},
{
"id": "Import log:",
"message": "Import log:",
"translation": "Import log:",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "{Type}: {Message}",
@@ -639,8 +591,7 @@
"argNum": 2,
"expr": "entry.Message"
}
],
"fuzzy": true
]
},
{
"id": "invalid timestamp string \"{FlagValue}\"",
@@ -656,8 +607,7 @@
"argNum": 1,
"expr": "flagValue"
}
],
"fuzzy": true
]
},
{
"id": "Latest timestamp: {Arg_1} ({Arg_2})",
@@ -679,22 +629,19 @@
"underlyingType": "interface{}",
"argNum": 2
}
],
"fuzzy": true
]
},
{
"id": "no configuration file defined, cannot write config",
"message": "no configuration file defined, cannot write config",
"translation": "no configuration file defined, cannot write config",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "key must only consist of A-Za-z0-9_-",
"message": "key must only consist of A-Za-z0-9_-",
"translation": "key must only consist of A-Za-z0-9_-",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "no service configuration \"{Name}\"",
@@ -710,8 +657,7 @@
"argNum": 1,
"expr": "name"
}
],
"fuzzy": true
]
}
]
}


@@ -15,8 +15,7 @@
"argNum": 1,
"expr": "strings.Join(info.ExportCapabilities, \", \")"
}
],
"fuzzy": true
]
},
{
"id": "import: {ImportCapabilities__}",
@@ -32,8 +31,7 @@
"argNum": 1,
"expr": "strings.Join(info.ImportCapabilities, \", \")"
}
],
"fuzzy": true
]
},
{
"id": "Failed reading config: {Err}",
@@ -49,22 +47,19 @@
"argNum": 1,
"expr": "err"
}
],
"fuzzy": true
]
},
{
"id": "Service name",
"message": "Service name",
"translation": "Service name",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "a service with this name already exists",
"message": "a service with this name already exists",
"translation": "a service with this name already exists",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Saved service {Name} using backend {Backend}",
@@ -88,8 +83,7 @@
"argNum": 2,
"expr": "service.Backend"
}
],
"fuzzy": true
]
},
{
"id": "The backend {Backend} requires authentication. Authenticate now?",
@@ -105,8 +99,7 @@
"argNum": 1,
"expr": "service.Backend"
}
],
"fuzzy": true
]
},
{
"id": "Delete the service configuration \"{Service}\"?",
@@ -122,15 +115,13 @@
"argNum": 1,
"expr": "service"
}
],
"fuzzy": true
]
},
{
"id": "Aborted",
"message": "Aborted",
"translation": "Aborted",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Service \"{Name}\" deleted",
@@ -146,8 +137,7 @@
"argNum": 1,
"expr": "service.Name"
}
],
"fuzzy": true
]
},
{
"id": "Updated service {Name} using backend {Backend}",
@@ -171,8 +161,7 @@
"argNum": 2,
"expr": "service.Backend"
}
],
"fuzzy": true
]
},
{
"id": "backend: {Backend}",
@@ -188,15 +177,13 @@
"argNum": 1,
"expr": "s.Backend"
}
],
"fuzzy": true
]
},
{
"id": "Token received, you can close this window now.",
"message": "Token received, you can close this window now.",
"translation": "Token received, you can close this window now.",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "backend {Backend} does not implement {InterfaceName}",
@@ -220,8 +207,7 @@
"argNum": 2,
"expr": "interfaceName"
}
],
"fuzzy": true
]
},
{
"id": "unknown backend \"{BackendName}\"",
@@ -237,78 +223,67 @@
"argNum": 1,
"expr": "backendName"
}
],
"fuzzy": true
]
},
{
"id": "Client ID",
"message": "Client ID",
"translation": "Client ID",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Client secret",
"message": "Client secret",
"translation": "Client secret",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Server URL",
"message": "Server URL",
"translation": "Server URL",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "User name",
"message": "User name",
"translation": "User name",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Access token",
"message": "Access token",
"translation": "Access token",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "File path",
"message": "File path",
"translation": "File path",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Append to file",
"message": "Append to file",
"translation": "Append to file",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Playlist title",
"message": "Playlist title",
"translation": "Playlist title",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Unique playlist identifier",
"message": "Unique playlist identifier",
"translation": "Unique playlist identifier",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Check for duplicate listens on import (slower)",
"message": "Check for duplicate listens on import (slower)",
"translation": "Check for duplicate listens on import (slower)",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Ignored duplicate listen {ListenedAt}: \"{TrackName}\" by {ArtistName} ({RecordingMBID})",
@@ -348,50 +323,43 @@
"argNum": 4,
"expr": "l.RecordingMBID"
}
],
"fuzzy": true
]
},
{
"id": "Disable auto correction of submitted listens",
"message": "Disable auto correction of submitted listens",
"translation": "Disable auto correction of submitted listens",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Ignore skipped listens",
"message": "Ignore skipped listens",
"translation": "Ignore skipped listens",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Specify a time zone for the listen timestamps",
"message": "Specify a time zone for the listen timestamps",
"translation": "Specify a time zone for the listen timestamps",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Directory path",
"message": "Directory path",
"translation": "Directory path",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Ignore listens in incognito mode",
"message": "Ignore listens in incognito mode",
"translation": "Ignore listens in incognito mode",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Minimum playback duration for skipped tracks (seconds)",
"message": "Minimum playback duration for skipped tracks (seconds)",
"translation": "Minimum playback duration for skipped tracks (seconds)",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Visit the URL for authorization: {URL}",
@@ -407,15 +375,13 @@
"argNum": 1,
"expr": "authURL.URL"
}
],
"fuzzy": true
]
},
{
"id": "Error: OAuth state mismatch",
"message": "Error: OAuth state mismatch",
"translation": "Error: OAuth state mismatch",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Access token received, you can use {Name} now.",
@@ -431,20 +397,24 @@
"argNum": 1,
"expr": "service.Name"
}
],
"fuzzy": true
]
},
{
"id": "exporting",
"message": "exporting",
"translation": "exporting",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "importing",
"message": "importing",
"translation": "importing",
"translatorComment": "Copied from source."
},
{
"id": "aborted",
"message": "aborted",
"translation": "aborted",
"translatorComment": "Copied from source.",
"fuzzy": true
},
@@ -452,43 +422,37 @@
"id": "done",
"message": "done",
"translation": "done",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Yes",
"message": "Yes",
"translation": "Yes",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "No",
"message": "No",
"translation": "No",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "no existing service configurations",
"message": "no existing service configurations",
"translation": "no existing service configurations",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Service",
"message": "Service",
"translation": "Service",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Backend",
"message": "Backend",
"translation": "Backend",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "Transferring {Entity} from {SourceName} to {TargetName}…",
@@ -520,8 +484,7 @@
"argNum": 3,
"expr": "c.targetName"
}
],
"fuzzy": true
]
},
{
"id": "From timestamp: {Arg_1} ({Arg_2})",
@@ -543,31 +506,7 @@
"underlyingType": "interface{}",
"argNum": 2
}
],
"fuzzy": true
},
{
"id": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"message": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"translation": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"translatorComment": "Copied from source.",
"placeholders": [
{
"id": "Arg_1",
"string": "%[1]v",
"type": "",
"underlyingType": "interface{}",
"argNum": 1
},
{
"id": "Arg_2",
"string": "%[2]s",
"type": "",
"underlyingType": "string",
"argNum": 2
}
],
"fuzzy": true
]
},
{
"id": "Imported {ImportCount} of {TotalCount} {Entity} into {TargetName}.",
@@ -607,15 +546,35 @@
"argNum": 4,
"expr": "c.targetName"
}
],
"fuzzy": true
]
},
{
"id": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"message": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"translation": "Import failed, last reported timestamp was {Arg_1} ({Arg_2})",
"translatorComment": "Copied from source.",
"placeholders": [
{
"id": "Arg_1",
"string": "%[1]v",
"type": "",
"underlyingType": "interface{}",
"argNum": 1
},
{
"id": "Arg_2",
"string": "%[2]s",
"type": "",
"underlyingType": "string",
"argNum": 2
}
]
},
{
"id": "Import log:",
"message": "Import log:",
"translation": "Import log:",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "{Type}: {Message}",
@@ -639,8 +598,7 @@
"argNum": 2,
"expr": "entry.Message"
}
],
"fuzzy": true
]
},
{
"id": "invalid timestamp string \"{FlagValue}\"",
@@ -656,8 +614,7 @@
"argNum": 1,
"expr": "flagValue"
}
],
"fuzzy": true
]
},
{
"id": "Latest timestamp: {Arg_1} ({Arg_2})",
@@ -679,22 +636,19 @@
"underlyingType": "interface{}",
"argNum": 2
}
],
"fuzzy": true
]
},
{
"id": "no configuration file defined, cannot write config",
"message": "no configuration file defined, cannot write config",
"translation": "no configuration file defined, cannot write config",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "key must only consist of A-Za-z0-9_-",
"message": "key must only consist of A-Za-z0-9_-",
"translation": "key must only consist of A-Za-z0-9_-",
"translatorComment": "Copied from source.",
"fuzzy": true
"translatorComment": "Copied from source."
},
{
"id": "no service configuration \"{Name}\"",
@@ -710,8 +664,7 @@
"argNum": 1,
"expr": "name"
}
],
"fuzzy": true
]
}
]
}


@@ -17,7 +17,7 @@ package version
const (
AppName = "scotty"
AppVersion = "0.5.2"
AppVersion = "0.6.0"
AppURL = "https://git.sr.ht/~phw/scotty/"
)

pkg/archive/archive.go Normal file

@@ -0,0 +1,101 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
// Implements generic access to files inside an archive.
//
// An archive in this context can be any container that holds files.
// In this implementation the archive can be a ZIP file or a directory.
package archive
import (
"fmt"
"io"
"io/fs"
"os"
)
// Generic interface to access files inside an archive.
type ArchiveReader interface {
io.Closer
// Open the file inside the archive identified by the given path.
// The path is relative to the archive's root.
// The caller must call [fs.File.Close] when finished using the file.
Open(path string) (fs.File, error)
// List files inside the archive which satisfy the given glob pattern.
// This method only returns files, not directories.
Glob(pattern string) ([]FileInfo, error)
}
// Open the archive at the given path.
// The archive can be a ZIP file or a directory. The implementation
// will detect the type of archive and return the appropriate
// implementation of the [ArchiveReader] interface.
func OpenArchive(path string) (ArchiveReader, error) {
fi, err := os.Stat(path)
if err != nil {
return nil, err
}
switch mode := fi.Mode(); {
case mode.IsRegular():
archive := &zipArchive{}
err := archive.OpenArchive(path)
if err != nil {
return nil, err
}
return archive, nil
case mode.IsDir():
archive := &dirArchive{}
err := archive.OpenArchive(path)
if err != nil {
return nil, err
}
return archive, nil
default:
return nil, fmt.Errorf("unsupported file mode: %s", mode)
}
}
// Interface for a file that can be opened when needed.
type OpenableFile interface {
// Open the file for reading.
// The caller is responsible for calling [io.ReadCloser.Close] when
// finished reading the file.
Open() (io.ReadCloser, error)
}
// Generic information about a file inside an archive.
// This provides the filename and allows opening the file for reading.
type FileInfo struct {
Name string
File OpenableFile
}
// An openable file in the filesystem.
type filesystemFile struct {
path string
}
func (f *filesystemFile) Open() (io.ReadCloser, error) {
return os.Open(f.path)
}

pkg/archive/archive_test.go Normal file

@@ -0,0 +1,189 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
package archive_test
import (
"fmt"
"io"
"log"
"slices"
"testing"
"go.uploadedlobster.com/scotty/pkg/archive"
)
func ExampleOpenArchive() {
a, err := archive.OpenArchive("testdata/archive.zip")
if err != nil {
log.Fatal(err)
}
defer a.Close()
files, err := a.Glob("a/*.txt")
for _, fi := range files {
fmt.Println(fi.Name)
f, err := fi.File.Open()
if err != nil {
log.Fatal(err)
}
defer f.Close()
data, err := io.ReadAll(f)
if err != nil {
log.Fatal(err)
}
fmt.Println(string(data))
}
// Output: a/1.txt
// a1
}
var testArchives = []string{
"testdata/archive",
"testdata/archive.zip",
}
func TestGlob(t *testing.T) {
for _, path := range testArchives {
a, err := archive.OpenArchive(path)
if err != nil {
t.Fatal(err)
}
defer a.Close()
files, err := a.Glob("[ab]/1.txt")
if err != nil {
t.Fatal(err)
}
if len(files) != 2 {
t.Errorf("Expected 2 files, got %d", len(files))
}
expectedName := "b/1.txt"
var fileInfo *archive.FileInfo = nil
for _, file := range files {
if file.Name == expectedName {
fileInfo = &file
}
}
if fileInfo == nil {
t.Fatalf("Expected file %q to be found", expectedName)
}
if fileInfo.File == nil {
t.Fatalf("Expected FileInfo to hold an openable File")
}
f, err := fileInfo.File.Open()
if err != nil {
t.Fatal(err)
}
expectedData := "b1\n"
data, err := io.ReadAll(f)
if err != nil {
t.Fatal(err)
}
if string(data) != expectedData {
t.Errorf("%s: Expected file content to be %q, got %q",
path, expectedData, string(data))
}
}
}
func TestGlobAll(t *testing.T) {
for _, path := range testArchives {
a, err := archive.OpenArchive(path)
if err != nil {
t.Fatal(err)
}
defer a.Close()
files, err := a.Glob("*/*")
if err != nil {
t.Fatal(err)
}
filenames := make([]string, 0, len(files))
for _, f := range files {
fmt.Printf("%v: %v\n", path, f.Name)
filenames = append(filenames, f.Name)
}
slices.Sort(filenames)
expectedFilenames := []string{
"a/1.txt",
"b/1.txt",
"b/2.txt",
}
if !slices.Equal(filenames, expectedFilenames) {
t.Errorf("%s: Expected filenames to be %q, got %q",
path, expectedFilenames, filenames)
}
}
}
func TestOpen(t *testing.T) {
for _, path := range testArchives {
a, err := archive.OpenArchive(path)
if err != nil {
t.Fatal(err)
}
defer a.Close()
f, err := a.Open("b/2.txt")
if err != nil {
t.Fatal(err)
}
expectedData := "b2\n"
data, err := io.ReadAll(f)
if err != nil {
t.Fatal(err)
}
if string(data) != expectedData {
t.Errorf("%s: Expected file content to be %q, got %q",
path, expectedData, string(data))
}
}
}
func TestOpenError(t *testing.T) {
for _, path := range testArchives {
a, err := archive.OpenArchive(path)
if err != nil {
t.Fatal(err)
}
defer a.Close()
_, err = a.Open("b/3.txt")
if err == nil {
t.Errorf("%s: Expected the Open command to fail", path)
}
}
}

pkg/archive/dir.go Normal file

@@ -0,0 +1,77 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
package archive
import (
"io/fs"
"os"
"path/filepath"
)
// An implementation of the [ArchiveReader] interface for directories.
type dirArchive struct {
path string
dirFS fs.FS
}
func (a *dirArchive) OpenArchive(path string) error {
a.path = filepath.Clean(path)
a.dirFS = os.DirFS(path)
return nil
}
func (a *dirArchive) Close() error {
return nil
}
// Open opens the named file in the archive.
// [fs.File.Close] must be called to release any associated resources.
func (a *dirArchive) Open(path string) (fs.File, error) {
return a.dirFS.Open(path)
}
func (a *dirArchive) Glob(pattern string) ([]FileInfo, error) {
files, err := fs.Glob(a.dirFS, pattern)
if err != nil {
return nil, err
}
result := make([]FileInfo, 0)
for _, name := range files {
stat, err := fs.Stat(a.dirFS, name)
if err != nil {
return nil, err
}
if stat.IsDir() {
continue
}
fullPath := filepath.Join(a.path, name)
info := FileInfo{
Name: name,
File: &filesystemFile{path: fullPath},
}
result = append(result, info)
}
return result, nil
}

pkg/archive/testdata/archive.zip vendored Normal file

Binary file not shown.

pkg/archive/testdata/archive/a/1.txt vendored Normal file

@@ -0,0 +1 @@
a1

pkg/archive/testdata/archive/b/1.txt vendored Normal file

@@ -0,0 +1 @@
b1

pkg/archive/testdata/archive/b/2.txt vendored Normal file

@@ -0,0 +1 @@
b2

pkg/archive/zip.go Normal file

@@ -0,0 +1,80 @@
/*
Copyright © 2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
package archive
import (
"archive/zip"
"io/fs"
"path/filepath"
)
// An implementation of the [ArchiveReader] interface for zip files.
type zipArchive struct {
zip *zip.ReadCloser
}
func (a *zipArchive) OpenArchive(path string) error {
zip, err := zip.OpenReader(path)
if err != nil {
return err
}
a.zip = zip
return nil
}
func (a *zipArchive) Close() error {
if a.zip == nil {
return nil
}
return a.zip.Close()
}
func (a *zipArchive) Glob(pattern string) ([]FileInfo, error) {
result := make([]FileInfo, 0)
for _, file := range a.zip.File {
if file.FileInfo().IsDir() {
continue
}
if matched, err := filepath.Match(pattern, file.Name); matched {
if err != nil {
return nil, err
}
info := FileInfo{
Name: file.Name,
File: file,
}
result = append(result, info)
}
}
return result, nil
}
func (a *zipArchive) Open(path string) (fs.File, error) {
file, err := a.zip.Open(path)
if err != nil {
return nil, err
}
return file, nil
}


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -22,7 +22,28 @@ THE SOFTWARE.
package jspf
import "time"
import (
"encoding/json"
"fmt"
"time"
)
// Represents a JSPF extension
type Extension any
// A map of JSPF extensions
type ExtensionMap map[string]Extension
// Parses the extension with the given ID and unmarshals it into "v".
// If the extension is not found or the data cannot be unmarshalled,
// an error is returned.
func (e ExtensionMap) Get(id string, v any) error {
ext, ok := e[id]
if !ok {
return fmt.Errorf("extension %q not found", id)
}
return unmarshalExtension(ext, v)
}
const (
// The identifier for the MusicBrainz / ListenBrainz JSPF playlist extension
@@ -83,3 +104,11 @@ type MusicBrainzTrackExtension struct {
// this document.
AdditionalMetadata map[string]any `json:"additional_metadata,omitempty"`
}
func unmarshalExtension(ext Extension, v any) error {
asJson, err := json.Marshal(ext)
if err != nil {
return err
}
return json.Unmarshal(asJson, v)
}
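The marshal/unmarshal round trip that `ExtensionMap.Get` relies on can be sketched standalone; the `added_by` field and the target struct below are illustrative, not the package's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Re-marshal the untyped extension value and unmarshal the resulting
// JSON into the typed target, mirroring unmarshalExtension above.
func unmarshalExtension(ext any, v any) error {
	asJSON, err := json.Marshal(ext)
	if err != nil {
		return err
	}
	return json.Unmarshal(asJSON, v)
}

func main() {
	// Extension data as it arrives from decoding generic JSPF JSON.
	ext := map[string]any{"added_by": "scotty"}
	var target struct {
		AddedBy string `json:"added_by"`
	}
	if err := unmarshalExtension(ext, &target); err != nil {
		panic(err)
	}
	fmt.Println(target.AddedBy) // prints "scotty"
}
```

The detour through `json.Marshal` avoids writing reflection code by hand: the untyped `map[string]any` is serialized once more and decoded with the standard struct-tag rules.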


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -26,6 +26,7 @@ import (
"bytes"
"fmt"
"log"
"testing"
"time"
"go.uploadedlobster.com/scotty/pkg/jspf"
@@ -38,7 +39,7 @@ func ExampleMusicBrainzTrackExtension() {
Tracks: []jspf.Track{
{
Title: "Oweynagat",
Extension: map[string]any{
Extension: jspf.ExtensionMap{
jspf.MusicBrainzTrackExtensionID: jspf.MusicBrainzTrackExtension{
AddedAt: time.Date(2023, 11, 24, 07, 47, 50, 0, time.UTC),
AddedBy: "scotty",
@@ -72,3 +73,29 @@
// }
// }
}
func TestExtensionMapGet(t *testing.T) {
ext := jspf.ExtensionMap{
jspf.MusicBrainzTrackExtensionID: jspf.MusicBrainzTrackExtension{
AddedAt: time.Date(2023, 11, 24, 07, 47, 50, 0, time.UTC),
AddedBy: "scotty",
},
}
var trackExt jspf.MusicBrainzTrackExtension
err := ext.Get(jspf.MusicBrainzTrackExtensionID, &trackExt)
if err != nil {
t.Fatal(err)
}
if trackExt.AddedBy != "scotty" {
t.Fatalf("expected 'scotty', got '%s'", trackExt.AddedBy)
}
}
func TestExtensionMapGetNotFound(t *testing.T) {
ext := jspf.ExtensionMap{}
var trackExt jspf.MusicBrainzTrackExtension
err := ext.Get(jspf.MusicBrainzTrackExtensionID, &trackExt)
if err == nil {
t.Fatal("expected ExtensionMap.Get to return an error")
}
}


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -44,7 +44,7 @@ type Playlist struct {
Attribution []Attribution `json:"attribution,omitempty"`
Links []Link `json:"link,omitempty"`
Meta []Meta `json:"meta,omitempty"`
Extension map[string]any `json:"extension,omitempty"`
Extension ExtensionMap `json:"extension,omitempty"`
Tracks []Track `json:"track"`
}
@@ -57,10 +57,10 @@ type Track struct {
Info string `json:"info,omitempty"`
Album string `json:"album,omitempty"`
TrackNum int `json:"trackNum,omitempty"`
Duration int `json:"duration,omitempty"`
Duration int64 `json:"duration,omitempty"`
Links []Link `json:"link,omitempty"`
Meta []Meta `json:"meta,omitempty"`
Extension map[string]any `json:"extension,omitempty"`
Extension ExtensionMap `json:"extension,omitempty"`
}
type Attribution map[string]string


@@ -1,5 +1,5 @@
/*
Copyright © 2023 Philipp Wolfer <phw@uploadedlobster.com>
Copyright © 2023-2025 Philipp Wolfer <phw@uploadedlobster.com>
Scotty is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
@@ -13,6 +13,7 @@ You should have received a copy of the GNU General Public License along with
Scotty. If not, see <https://www.gnu.org/licenses/>.
*/
// Helper functions to set up rate limiting with resty.
package ratelimit
import (
@@ -25,8 +26,8 @@ import (
const (
RetryCount = 5
DefaultRateLimitWaitSeconds = 5
MaxWaitTimeSeconds = 60
DefaultRateLimitWait = 5 * time.Second
MaxWaitTime = 60 * time.Second
)
// Implements rate HTTP header based limiting for resty.
@@ -46,16 +47,15 @@ func EnableHTTPHeaderRateLimit(client *resty.Client, resetInHeader string) {
return code == http.StatusTooManyRequests || code >= http.StatusInternalServerError
},
)
client.SetRetryMaxWaitTime(time.Duration(MaxWaitTimeSeconds * time.Second))
client.SetRetryMaxWaitTime(MaxWaitTime)
client.SetRetryAfter(func(client *resty.Client, resp *resty.Response) (time.Duration, error) {
var err error
var retryAfter int = DefaultRateLimitWaitSeconds
retryAfter := DefaultRateLimitWait
if resp.StatusCode() == http.StatusTooManyRequests {
retryAfter, err = strconv.Atoi(resp.Header().Get(resetInHeader))
if err != nil {
retryAfter = DefaultRateLimitWaitSeconds
retryAfterHeader, err := strconv.Atoi(resp.Header().Get(resetInHeader))
if err == nil {
retryAfter = time.Duration(retryAfterHeader) * time.Second
}
}
return time.Duration(retryAfter * int(time.Second)), err
return retryAfter, nil
})
}


@@ -20,11 +20,18 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
// Package to parse and writer .scrobbler.log files as written by Rockbox.
// Package to parse and write .scrobbler.log files as written by Rockbox.
//
// The parser supports reading version 1.1 and 1.0 of the scrobbler log file
// format. The latter is only supported if encoded in UTF-8.
//
// When written it always writes version 1.1 of the scrobbler log file format,
// which includes the MusicBrainz recording ID as the last field of each row.
//
// See
// - https://www.rockbox.org/wiki/LastFMLog
// - https://git.rockbox.org/cgit/rockbox.git/tree/apps/plugins/lastfm_scrobbler.c
// - https://web.archive.org/web/20110110053056/http://www.audioscrobbler.net/wiki/Portable_Player_Logging
package scrobblerlog
import (
@@ -32,6 +39,7 @@ import (
"encoding/csv"
"fmt"
"io"
"iter"
"strconv"
"strings"
"time"
@@ -79,53 +87,46 @@ type ScrobblerLog struct {
FallbackTimezone *time.Location
}
// Parses a scrobbler log file from the given reader.
//
// The reader must provide a valid scrobbler log file with a valid header.
// This function implicitly calls [ScrobblerLog.ReadHeader].
func (l *ScrobblerLog) Parse(data io.Reader, ignoreSkipped bool) error {
l.Records = make([]Record, 0)
reader := bufio.NewReader(data)
err := l.ReadHeader(reader)
tsvReader, err := l.initReader(data)
if err != nil {
return err
}
tsvReader := csv.NewReader(reader)
tsvReader.Comma = '\t'
// Row length is often flexible
tsvReader.FieldsPerRecord = -1
for {
// A row is:
// artistName releaseName trackName trackNumber duration rating timestamp recordingMBID
row, err := tsvReader.Read()
if err == io.EOF {
break
} else if err != nil {
return err
}
// fmt.Printf("row: %v\n", row)
// We consider only the last field (recording MBID) optional
if len(row) < 7 {
line, _ := tsvReader.FieldPos(0)
return fmt.Errorf("invalid record in scrobblerlog line %v", line)
}
record, err := l.rowToRecord(row)
for _, err := range l.iterRecords(tsvReader, ignoreSkipped) {
if err != nil {
return err
}
if ignoreSkipped && record.Rating == RatingSkipped {
continue
}
l.Records = append(l.Records, record)
}
return nil
}
// Parses a scrobbler log file from the given reader and returns an iterator over all records.
//
// The reader must provide a valid scrobbler log file with a valid header.
// This function implicitly calls [ScrobblerLog.ReadHeader].
func (l *ScrobblerLog) ParseIter(data io.Reader, ignoreSkipped bool) iter.Seq2[Record, error] {
tsvReader, err := l.initReader(data)
if err != nil {
return func(yield func(Record, error) bool) {
yield(Record{}, err)
}
}
return l.iterRecords(tsvReader, ignoreSkipped)
}
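`ParseIter` returns an `iter.Seq2[Record, error]`, which is just a function taking a `yield` callback. On Go 1.23+ it can be consumed with `for record, err := range ...` (as the new test below does); on any Go version the underlying protocol can be driven by hand. A stdlib-only sketch of that protocol, with a stand-in element type rather than the package's `Record`:

```go
package main

import "fmt"

// seq2 mirrors the shape of iter.Seq2[K, V] without importing iter,
// so this sketch also compiles on Go versions before 1.23.
type seq2[K, V any] func(yield func(K, V) bool)

// numbers yields 1..5 paired with a nil error, stopping early
// when the consumer's yield callback returns false.
func numbers() seq2[int, error] {
	return func(yield func(int, error) bool) {
		for i := 1; i <= 5; i++ {
			if !yield(i, nil) {
				return // consumer stopped, like `break` in a range loop
			}
		}
	}
}

func main() {
	sum := 0
	numbers()(func(n int, err error) bool {
		if err != nil {
			return false
		}
		sum += n
		return n < 3 // consume 1, 2, 3, then stop early
	})
	fmt.Println(sum) // 6
}
```

Returning an error through the second yield value (rather than aborting the whole parse) is what lets `ParseIter` report a bad row without losing the records before it.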
// Append writes the given records to the writer.
//
// The writer should be for an existing scrobbler log file or
// [ScrobblerLog.WriteHeader] should be called before this function.
// Returns the last timestamp of the records written.
func (l *ScrobblerLog) Append(data io.Writer, records []Record) (lastTimestamp time.Time, err error) {
tsvWriter := csv.NewWriter(data)
tsvWriter.Comma = '\t'
@@ -153,7 +154,45 @@ func (l *ScrobblerLog) Append(data io.Writer, records []Record) (lastTimestamp t
return
}
func (l *ScrobblerLog) ReadHeader(reader *bufio.Reader) error {
// Parses just the header of a scrobbler log file from the given reader.
//
// This function sets [ScrobblerLog.TZ] and [ScrobblerLog.Client].
func (l *ScrobblerLog) ReadHeader(reader io.Reader) error {
return l.readHeader(bufio.NewReader(reader))
}
// Writes the header of a scrobbler log file to the given writer.
func (l *ScrobblerLog) WriteHeader(writer io.Writer) error {
headers := []string{
"#AUDIOSCROBBLER/1.1\n",
"#TZ/" + string(l.TZ) + "\n",
"#CLIENT/" + l.Client + "\n",
}
for _, line := range headers {
_, err := writer.Write([]byte(line))
if err != nil {
return err
}
}
return nil
}
func (l *ScrobblerLog) initReader(data io.Reader) (*csv.Reader, error) {
reader := bufio.NewReader(data)
err := l.readHeader(reader)
if err != nil {
return nil, err
}
tsvReader := csv.NewReader(reader)
tsvReader.Comma = '\t'
// Row length is often flexible
tsvReader.FieldsPerRecord = -1
return tsvReader, nil
}
func (l *ScrobblerLog) readHeader(reader *bufio.Reader) error {
// Skip header
for i := 0; i < 3; i++ {
line, _, err := reader.ReadLine()
@@ -191,36 +230,64 @@ func (l *ScrobblerLog) ReadHeader(reader *bufio.Reader) error {
return nil
}
func (l *ScrobblerLog) WriteHeader(writer io.Writer) error {
headers := []string{
"#AUDIOSCROBBLER/1.1\n",
"#TZ/" + string(l.TZ) + "\n",
"#CLIENT/" + l.Client + "\n",
func (l *ScrobblerLog) iterRecords(reader *csv.Reader, ignoreSkipped bool) iter.Seq2[Record, error] {
return func(yield func(Record, error) bool) {
l.Records = make([]Record, 0)
for {
record, err := l.parseRow(reader)
if err == io.EOF {
break
} else if err != nil {
yield(Record{}, err)
break
}
if ignoreSkipped && record.Rating == RatingSkipped {
continue
}
l.Records = append(l.Records, *record)
if !yield(*record, nil) {
break
}
for _, line := range headers {
_, err := writer.Write([]byte(line))
if err != nil {
return err
}
}
return nil
}
func (l ScrobblerLog) rowToRecord(row []string) (Record, error) {
var record Record
func (l *ScrobblerLog) parseRow(reader *csv.Reader) (*Record, error) {
// A row is:
// artistName releaseName trackName trackNumber duration rating timestamp recordingMBID
row, err := reader.Read()
if err != nil {
return nil, err
}
// fmt.Printf("row: %v\n", row)
// We consider only the last field (recording MBID) optional
// This was added in the 1.1 file format.
if len(row) < 7 {
line, _ := reader.FieldPos(0)
return nil, fmt.Errorf("invalid record in scrobblerlog line %v", line)
}
return l.rowToRecord(row)
}
func (l ScrobblerLog) rowToRecord(row []string) (*Record, error) {
trackNumber, err := strconv.Atoi(row[3])
if err != nil {
return record, err
return nil, err
}
duration, err := strconv.Atoi(row[4])
if err != nil {
return record, err
return nil, err
}
timestamp, err := strconv.ParseInt(row[6], 10, 64)
if err != nil {
return record, err
return nil, err
}
var timezone *time.Location = nil
@@ -228,7 +295,7 @@ func (l ScrobblerLog) rowToRecord(row []string) (Record, error) {
timezone = l.FallbackTimezone
}
record = Record{
record := Record{
ArtistName: row[0],
AlbumName: row[1],
TrackName: row[2],
@@ -242,7 +309,7 @@ func (l ScrobblerLog) rowToRecord(row []string) (Record, error) {
record.MusicBrainzRecordingID = mbtypes.MBID(row[7])
}
return record, nil
return &record, nil
}
// Convert a Unix timestamp to a [time.Time] object, but treat the timestamp


@@ -44,7 +44,14 @@ Kraftwerk Trans-Europe Express The Hall of Mirrors 2 474 S 1260358000 385ba9e9-6
Teeth Agency You Don't Have To Live In Pain Wolfs Jam 2 107 L 1260359404 1262beaf-19f8-4534-b9ed-7eef9ca8e83f
`
func TestParser(t *testing.T) {
var testScrobblerLogInvalid = `#AUDIOSCROBBLER/1.1
#TZ/UNKNOWN
#CLIENT/Rockbox sansaclipplus $Revision$
Özcan Deniz Ses ve Ayrilik Sevdanin rengi (sipacik) byMrTurkey 5 306 L 1260342084
Özcan Deniz Hediye 2@V@7 Bir Dudaktan 1 210 L
`
func TestParse(t *testing.T) {
assert := assert.New(t)
data := bytes.NewBufferString(testScrobblerLog)
result := scrobblerlog.ScrobblerLog{}
@@ -68,7 +75,7 @@ func TestParser(t *testing.T) {
record4.MusicBrainzRecordingID)
}
func TestParserIgnoreSkipped(t *testing.T) {
func TestParseIgnoreSkipped(t *testing.T) {
assert := assert.New(t)
data := bytes.NewBufferString(testScrobblerLog)
result := scrobblerlog.ScrobblerLog{}
@@ -81,7 +88,7 @@ func TestParserIgnoreSkipped(t *testing.T) {
record4.MusicBrainzRecordingID)
}
func TestParserFallbackTimezone(t *testing.T) {
func TestParseFallbackTimezone(t *testing.T) {
assert := assert.New(t)
data := bytes.NewBufferString(testScrobblerLog)
result := scrobblerlog.ScrobblerLog{
@@ -96,6 +103,29 @@ func TestParserFallbackTimezone(t *testing.T) {
)
}
func TestParseInvalid(t *testing.T) {
assert := assert.New(t)
data := bytes.NewBufferString(testScrobblerLogInvalid)
result := scrobblerlog.ScrobblerLog{}
err := result.Parse(data, true)
assert.ErrorContains(err, "invalid record in scrobblerlog line 2")
}
func TestParseIter(t *testing.T) {
assert := assert.New(t)
data := bytes.NewBufferString(testScrobblerLog)
result := scrobblerlog.ScrobblerLog{}
records := make([]scrobblerlog.Record, 0)
for record, err := range result.ParseIter(data, false) {
require.NoError(t, err)
records = append(records, record)
}
assert.Len(records, 5)
record1 := result.Records[0]
assert.Equal("Ses ve Ayrilik", record1.AlbumName)
}
func TestAppend(t *testing.T) {
assert := assert.New(t)
data := make([]byte, 0, 10)