Compare commits

..

85 Commits

Author SHA1 Message Date
github-actions[bot] 97ea3a8124
Merge development into master 2024-06-02 14:21:09 +00:00
Alex Meyer 77302fad21
Fixed throttled_providers.dat reset 2024-05-30 22:16:24 -04:00
Anderson Shindy Oki b7e6de71ff
Fixed bazarr restart traceback exception 2024-05-30 22:08:29 -04:00
JayZed 884200441b
Fix for case-insensitive filesystem updates
This fix was made necessary when a library changed the case of one of its files, but kept the name the same.
When the file was updated in place, the case did not change.
The solution is to delete the file first before extracting the new one from the zip file with the changed case.
2024-05-27 21:18:45 -04:00
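The delete-before-extract idea from this commit can be sketched in isolation. This is a minimal standalone sketch, not Bazarr's exact update code; the function name and paths are illustrative (the real change appears in the apply_update diff further down):

import os
import zipfile

def extract_member(archive: zipfile.ZipFile, member: str, dest_root: str) -> None:
    """Extract one zip member, removing any same-named file first.

    On a case-insensitive filesystem, writing 'apprise.py' over an existing
    'Apprise.py' silently keeps the old casing, so the stale file is deleted
    before the new one is written out.
    """
    file_path = os.path.join(dest_root, member)
    os.makedirs(os.path.dirname(file_path), exist_ok=True)
    if not os.path.isdir(file_path):
        if os.path.exists(file_path):
            os.remove(file_path)  # drop the old casing before re-creating
        with open(file_path, 'wb') as f:
            f.write(archive.read(member))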
morpheus65535 0e183c428b Fixed subdivx series search process. #2499 2024-05-27 20:58:29 -04:00
morpheus65535 ebb0cc16b1
no log: Delete libs/apprise/apprise.pyi 2024-05-25 00:19:06 -04:00
morpheus65535 0abf56191c
no log: Delete libs/apprise/apprise.py 2024-05-25 00:18:49 -04:00
morpheus65535 5ca733eac0 Reverted to apprise 1.7.6 to fix an issue with the upgrade process first. 1.8.0 will get back in nightly shortly. #2497 2024-05-24 13:19:37 -04:00
morpheus65535 3e929d8ef9 Fixed upgrade process that was broken since Apprise 1.8.0 update. #2497 2024-05-23 20:46:07 -04:00
dependabot[bot] 07534282a2
no log: Bump @types/lodash from 4.17.0 to 4.17.1 in /frontend (#2495)
Bumps [@types/lodash](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/lodash) from 4.17.0 to 4.17.1.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/lodash)

---
updated-dependencies:
- dependency-name: "@types/lodash"
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-13 22:24:04 -04:00
morpheus65535 d8e58cac83 no log: fixed empty subtitles being saved 2024-05-13 22:11:40 -04:00
morpheus65535 811394cec3
no log: Delete libs/apprise/Apprise.py 2024-05-12 23:16:17 -04:00
morpheus65535 9cb2708909
no log: Delete libs/apprise/Apprise.pyi 2024-05-12 23:16:01 -04:00
morpheus65535 d70a92e947 Fixed uppercase issue in Apprise module name. 2024-05-12 23:13:21 -04:00
morpheus65535 b3a5d43a10 Fixed issue while saving some odd case ASS embedded subtitles. 2024-05-12 10:13:21 -04:00
morpheus65535 fd0a8c3d3b Emergency fix following Apprise 1.8.0 upgrade 2024-05-12 10:11:43 -04:00
morpheus65535 86d34039a3 Updated apprise to 1.8.0 2024-05-11 23:22:55 -04:00
morpheus65535 006ee0f63a Fixed issue with subssabbz provider comparing None with int. 2024-05-10 06:46:50 -04:00
morpheus65535 47011f429a Fixed issue with subsunacs provider comparing None with int. 2024-05-10 06:36:04 -04:00
morpheus65535 485122bfae no log: removing leftover subscene remnants 2024-05-09 15:27:48 -04:00
morpheus65535 bb4b01f3fb Removed closed subscene provider 2024-05-09 15:19:31 -04:00
morpheus65535 f914ed0cbf Merge remote-tracking branch 'origin/development' into development 2024-05-08 23:37:11 -04:00
Anderson Shindy Oki 5b5beadf4d Removed dependency on moment library
* feat: remove moment dependency

* refactor

* add tests

* small format

* rename argument
2024-05-08 23:36:50 -04:00
Anderson Shindy Oki 6e3422524c
Removed dependency on moment
* feat: remove moment dependency

* refactor

* add tests

* small format

* rename argument
2024-05-08 23:35:41 -04:00
morpheus65535 014ba07aea Merge remote-tracking branch 'origin/development' into development 2024-05-08 22:29:34 -04:00
morpheus65535 4815313ac6 Fixed db migrations dropping tables content because of ForeignKey constraints. #2489 2024-05-08 22:29:31 -04:00
Anderson Shindy Oki 397310eff5
no log: Fix husky installation (#2488) 2024-05-07 20:32:17 -04:00
morpheus65535 d686ab71b2 Merge remote-tracking branch 'origin/development' into development 2024-05-06 23:42:17 -04:00
morpheus65535 5630c441b0 Added a database migration to get past the issues with incomplete table_languages_profiles. #2485 2024-05-06 23:42:02 -04:00
dependabot[bot] d886515f9c
no log: Bump recharts from 2.12.4 to 2.12.6 in /frontend (#2487)
Bumps [recharts](https://github.com/recharts/recharts) from 2.12.4 to 2.12.6.
- [Release notes](https://github.com/recharts/recharts/releases)
- [Changelog](https://github.com/recharts/recharts/blob/3.x/CHANGELOG.md)
- [Commits](https://github.com/recharts/recharts/compare/v2.12.4...v2.12.6)

---
updated-dependencies:
- dependency-name: recharts
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-06 19:06:38 -04:00
Anderson Shindy Oki 970b0f9d47
Added animetosho release info 2024-05-04 13:19:36 -04:00
morpheus65535 0bddb5ba55 no log: pep8 fixes 2024-05-02 22:53:36 -04:00
morpheus65535 2c4ed03817 Fixed HI subtitles identification when downloading and improved some constants. #2386 2024-05-02 22:05:41 -04:00
JayZed bea2f0b781
Fixed embedded ASS subtitles writing encoding error
For a couple of files, I had UnicodeEncodeError raised when writing out a file that had been successfully read in.
In my case, the output file was truncated to 1 KB.
2024-05-02 06:32:03 -04:00
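The failure mode is easy to reproduce: text decoded from one codec and written back with the platform default (e.g. cp1252 on Windows) raises UnicodeEncodeError partway through and leaves a truncated file. A minimal sketch of the fix applied in this commit, with an illustrative file name, is to make the output encoding explicit:

# Minimal sketch: the dialogue line contains characters cp1252 cannot encode.
lines = ["Dialogue: 0,0:00:01.00,0:00:02.00,Default,,0,0,0,,Héllo ♪ № …\n"]

# open(path, "w") uses the locale's default codec and can raise UnicodeEncodeError;
# an explicit UTF-8 codec (with errors="ignore" as a last resort) cannot.
with open("cleaned.ass", "w", encoding="utf-8", errors="ignore") as f:
    f.writelines(lines)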
Wim de With ad151ff139
Added timeout to update check API call 2024-05-01 06:13:55 -04:00
Anderson Shindy Oki 2782551c9b
Fixed Animetosho provider error for tv shows
* chore: Skip anime

* wip
2024-04-30 06:28:41 -04:00
dependabot[bot] 1c2538ef3c
no log: Bump @testing-library/react from 14.3.0 to 15.0.5 in /frontend (#2478)
Bumps [@testing-library/react](https://github.com/testing-library/react-testing-library) from 14.3.0 to 15.0.5.
- [Release notes](https://github.com/testing-library/react-testing-library/releases)
- [Changelog](https://github.com/testing-library/react-testing-library/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testing-library/react-testing-library/compare/v14.3.0...v15.0.5)

---
updated-dependencies:
- dependency-name: "@testing-library/react"
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-29 22:13:28 -04:00
JayZed 5749971d67
Improved whisper provider to not throttle when unsupported audio language is encountered. #2474
As we have noted before, bad input data should be no reason to throttle a provider.
In this case, if the input language was not supported by whisper, we were raising a ValueError that was never caught, causing an error in the whisper provider for which it was throttled.
Instead, we are now detecting this case and logging an error message.
However, given that the input language was not one of the 99 currently known to whisper, it's probably a mislabeled audio track. If the user's desired output language is English, then we will tell whisper that the input audio is also English and ask it to transcribe it. Whisper does a very good job of transcribing almost anything to English, so it's worth a try.
This should address the throttling in issue #2474.
2024-04-29 22:11:47 -04:00
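A hedged sketch of that fallback logic; `KNOWN_WHISPER_LANGUAGES` and `pick_whisper_task` are illustrative stand-ins, not the provider's real names:

import logging

KNOWN_WHISPER_LANGUAGES = {"en", "fr", "de", "es", "ja"}  # abridged; whisper knows 99

def pick_whisper_task(audio_lang, wanted_lang):
    """Return (language_to_tell_whisper, task) or (None, None) to skip."""
    if audio_lang in KNOWN_WHISPER_LANGUAGES:
        task = "transcribe" if audio_lang == wanted_lang else "translate"
        return audio_lang, task
    # bad input data: log instead of raising, so the provider is not throttled
    logging.error("Unsupported audio language %r", audio_lang)
    if wanted_lang == "en":
        # likely a mislabeled track; call it English and try transcribing anyway
        return "en", "transcribe"
    return None, None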
morpheus65535 c5a5dc9ddf no log: fixed tasks view when running in dev environment (--no-tasks). 2024-04-29 16:06:34 -04:00
morpheus65535 5429749e72
no log: Update schedule.yaml 2024-03-20 13:22:04 -04:00
morpheus65535 c2ed1cdb58
no log: Update schedule.yaml 2024-03-20 13:20:51 -04:00
github-actions[bot] 56d54e405b
Merge development into master 2024-02-20 00:29:19 +00:00
github-actions[bot] 38094e6323
Merge development into master 2024-02-04 01:30:27 +00:00
morpheus65535 8282899fac Merge branch 'development'
# Conflicts:
#	.github/workflows/ci.yml
2023-11-28 07:28:57 -05:00
github-actions[bot] a09cc34e09
Merge development into master 2023-10-14 12:45:55 +00:00
github-actions[bot] 823f3d8d3f
Merge development into master 2023-09-16 02:44:25 +00:00
Liang Yi 07697fa212
no log: Revert previous commits from dependbot 2023-08-06 19:11:45 +00:00
dependabot[bot] 643c120c91
no log: Bump vite from 4.3.2 to 4.3.9 in /frontend (#2182)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 4.3.2 to 4.3.9.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v4.3.9/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 02:55:00 +08:00
dependabot[bot] 1834fbacd9
no log: Bump tough-cookie from 4.1.2 to 4.1.3 in /frontend (#2198)
Bumps [tough-cookie](https://github.com/salesforce/tough-cookie) from 4.1.2 to 4.1.3.
- [Release notes](https://github.com/salesforce/tough-cookie/releases)
- [Changelog](https://github.com/salesforce/tough-cookie/blob/master/CHANGELOG.md)
- [Commits](https://github.com/salesforce/tough-cookie/compare/v4.1.2...v4.1.3)

---
updated-dependencies:
- dependency-name: tough-cookie
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 02:54:43 +08:00
dependabot[bot] 54248ac592
no log: Bump word-wrap from 1.2.3 to 1.2.4 in /frontend (#2204)
Bumps [word-wrap](https://github.com/jonschlinkert/word-wrap) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/jonschlinkert/word-wrap/releases)
- [Commits](https://github.com/jonschlinkert/word-wrap/compare/1.2.3...1.2.4)

---
updated-dependencies:
- dependency-name: word-wrap
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 02:54:26 +08:00
morpheus65535 ec2d10f195 no log: fix CI 2023-08-04 10:43:10 -04:00
github-actions[bot] 64af56cb80
Merge development into master 2023-07-22 13:49:00 +00:00
github-actions[bot] 3c2f940469
Merge development into master 2023-07-11 00:28:02 +00:00
morpheus65535 77f3ff82d5 no log: fix changelog template 2023-06-24 18:19:57 -04:00
morpheus65535 080710e7e1 Merge branch 'development'
# Conflicts:
#	frontend/package-lock.json
#	frontend/package.json
2023-06-24 18:17:40 -04:00
Liang Yi 38d95c5a7c
no log: Revert "no log: Bump socket.io-parser from 4.2.2 to 4.2.3 in /frontend (#2150)"
This reverts commit e7ce635a86.
2023-05-26 02:52:24 +00:00
dependabot[bot] e7ce635a86
no log: Bump socket.io-parser from 4.2.2 to 4.2.3 in /frontend (#2150)
Bumps [socket.io-parser](https://github.com/socketio/socket.io-parser) from 4.2.2 to 4.2.3.
- [Release notes](https://github.com/socketio/socket.io-parser/releases)
- [Changelog](https://github.com/socketio/socket.io-parser/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io-parser/compare/4.2.2...4.2.3)

---
updated-dependencies:
- dependency-name: socket.io-parser
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-25 23:22:56 +08:00
morpheus65535 b485dd9c71 Merge branch 'development'
# Conflicts:
#	frontend/package-lock.json
2023-05-01 20:39:33 -04:00
dependabot[bot] 1fa4cf6afc
no log: Bump d3-color and recharts in /frontend (#2079)
Bumps [d3-color](https://github.com/d3/d3-color) to 3.1.0 and updates ancestor dependency [recharts](https://github.com/recharts/recharts). These dependencies need to be updated together.


Updates `d3-color` from 2.0.0 to 3.1.0
- [Release notes](https://github.com/d3/d3-color/releases)
- [Commits](https://github.com/d3/d3-color/compare/v2.0.0...v3.1.0)

Updates `recharts` from 2.1.16 to 2.4.3
- [Release notes](https://github.com/recharts/recharts/releases)
- [Changelog](https://github.com/recharts/recharts/blob/master/CHANGELOG.md)
- [Commits](https://github.com/recharts/recharts/compare/v2.1.16...v2.4.3)

---
updated-dependencies:
- dependency-name: d3-color
  dependency-type: indirect
- dependency-name: recharts
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 00:53:45 +08:00
github-actions[bot] 71a2c758b7
Merge development into master 2023-03-03 02:12:52 +00:00
morpheus65535 1836014ad3
Delete announcements.json 2023-02-12 08:10:39 -05:00
morpheus65535 f3507c4d63
Create announcements.json 2023-02-11 09:42:46 -05:00
github-actions[bot] 0c7e422297
Merge development into master 2022-12-31 16:37:03 +00:00
github-actions[bot] 5722085d1e
Merge development into master 2022-12-05 02:33:36 +00:00
github-actions[bot] 70346950fd
Merge development into master 2022-10-15 12:45:09 +00:00
github-actions[bot] 5882fc07d2
Merge development into master 2022-08-31 02:43:49 +00:00
github-actions[bot] e439f2e3ed
Merge development into master 2022-07-02 12:48:11 +00:00
morpheus65535 135bdf2d45 Merge branch 'development'
# Conflicts:
#	frontend/package-lock.json
2022-04-30 09:09:50 -04:00
github-actions[bot] 1a45fa67bc
Merge development into master 2022-02-26 15:03:54 +00:00
dependabot[bot] bd1423891c
no log: Bump nanoid from 3.1.23 to 3.3.1 in /frontend (#1736)
Bumps [nanoid](https://github.com/ai/nanoid) from 3.1.23 to 3.3.1.
- [Release notes](https://github.com/ai/nanoid/releases)
- [Changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ai/nanoid/compare/3.1.23...3.3.1)

---
updated-dependencies:
- dependency-name: nanoid
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-22 22:59:41 -05:00
dependabot[bot] 2a740fb26d
no log: Bump url-parse from 1.5.3 to 1.5.7 in /frontend (#1729)
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.3 to 1.5.7.
- [Release notes](https://github.com/unshiftio/url-parse/releases)
- [Commits](https://github.com/unshiftio/url-parse/compare/1.5.3...1.5.7)

---
updated-dependencies:
- dependency-name: url-parse
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-22 22:56:23 -05:00
github-actions[bot] e305aad597
Merge development into master 2021-12-30 11:52:19 +00:00
github-actions[bot] e1f836dfea
Merge development into master 2021-11-19 01:45:58 +00:00
github-actions[bot] 88b69b5243
Merge development into master 2021-10-12 23:45:07 +00:00
github-actions[bot] ac2052f43d
Merge development into master 2021-09-11 12:47:42 +00:00
github-actions[bot] c66d5662b4
Merge development into master 2021-08-31 16:54:28 +00:00
github-actions[bot] 87bc9ecd29
Merge development into master 2021-08-13 12:06:07 +00:00
github-actions[bot] 80ffdc91b7
Merge development into master 2021-07-19 01:29:47 +00:00
morpheus65535 15d32b61df no log: test action to make sure that Bazarr is starting properly 2021-06-19 10:10:47 -04:00
morpheus65535 5af382e62d no log: test action to make sure that Bazarr is starting properly 2021-06-19 10:03:00 -04:00
morpheus65535 da8d13ff78 no log: test action to make sure that Bazarr is starting properly 2021-06-19 09:48:44 -04:00
Liang Yi 3af4c39e73
no log: Revert wrong commit 2021-05-09 15:27:43 +08:00
Liang Yi 23cf9d4c13
no log: Fix pipeline 2021-05-09 15:26:12 +08:00
github-actions[bot] 2bceafe5af
Merge development into master 2021-05-08 14:06:43 +00:00
github-actions[bot] 1e059d0cee
Merge development into master 2021-04-19 13:27:31 +00:00
53 changed files with 348 additions and 1491 deletions

View File

@ -7,7 +7,6 @@ from flask_restx import Resource, Namespace, fields, marshal
from app.config import settings
from app.logger import empty_log
from app.get_args import args
from utilities.central import get_log_file_path
from ..utils import authenticate

View File

@ -25,7 +25,7 @@ def check_releases():
url_releases = 'https://api.github.com/repos/morpheus65535/Bazarr/releases?per_page=100'
try:
logging.debug(f'BAZARR getting releases from Github: {url_releases}')
- r = requests.get(url_releases, allow_redirects=True)
+ r = requests.get(url_releases, allow_redirects=True, timeout=15)
r.raise_for_status()
except requests.exceptions.HTTPError:
logging.exception("Error trying to get releases from Github. Http error.")
@ -165,6 +165,9 @@ def apply_update():
parent_dir = os.path.dirname(file_path)
os.makedirs(parent_dir, exist_ok=True)
if not os.path.isdir(file_path):
+ if os.path.exists(file_path):
+     # remove the file first to handle case-insensitive file systems
+     os.remove(file_path)
with open(file_path, 'wb+') as f:
f.write(archive.read(file))
except Exception:
@ -229,6 +232,9 @@ def update_cleaner(zipfile, bazarr_dir, config_dir):
dir_to_ignore_regex = re.compile(dir_to_ignore_regex_string)
file_to_ignore = ['nssm.exe', '7za.exe', 'unins000.exe', 'unins000.dat']
+ # prevent deletion of leftover Apprise.py/pyi files after 1.8.0 version that caused issue on case-insensitive
+ # filesystem. This could be removed in a couple of major versions.
+ file_to_ignore += ['Apprise.py', 'Apprise.pyi', 'apprise.py', 'apprise.pyi']
logging.debug(f'BAZARR upgrade leftover cleaner will ignore those files: {", ".join(file_to_ignore)}')
extension_to_ignore = ['.pyc']
logging.debug(

View File

@ -58,7 +58,7 @@ class Validator(OriginalValidator):
def check_parser_binary(value):
try:
get_binary(value)
- except BinaryNotFound as e:
+ except BinaryNotFound:
raise ValidationError(f"Executable '{value}' not found in search path. Please install before making this selection.")
return True
@ -293,10 +293,6 @@ validators = [
Validator('napisy24.username', must_exist=True, default='', is_type_of=str, cast=str),
Validator('napisy24.password', must_exist=True, default='', is_type_of=str, cast=str),
- # subscene section
- Validator('subscene.username', must_exist=True, default='', is_type_of=str, cast=str),
- Validator('subscene.password', must_exist=True, default='', is_type_of=str, cast=str),
# betaseries section
Validator('betaseries.token', must_exist=True, default='', is_type_of=str, cast=str),
@ -686,15 +682,6 @@ def save_settings(settings_items):
reset_providers = True
region.delete('oscom_token')
- if key == 'settings-subscene-username':
-     if key != settings.subscene.username:
-         reset_providers = True
-         region.delete('subscene_cookies2')
- elif key == 'settings-subscene-password':
-     if key != settings.subscene.password:
-         reset_providers = True
-         region.delete('subscene_cookies2')
if key == 'settings-titlovi-username':
if key != settings.titlovi.username:
reset_providers = True

View File

@ -125,7 +125,7 @@ def provider_throttle_map():
PROVIDERS_FORCED_OFF = ["addic7ed", "tvsubtitles", "legendasdivx", "napiprojekt", "shooter",
"hosszupuska", "supersubtitles", "titlovi", "assrt", "subscene"]
"hosszupuska", "supersubtitles", "titlovi", "assrt"]
throttle_count = {}
@ -259,11 +259,6 @@ def get_providers_auth():
'also_foreign': False, # fixme
'verify_ssl': settings.podnapisi.verify_ssl
},
- 'subscene': {
-     'username': settings.subscene.username,
-     'password': settings.subscene.password,
-     'only_foreign': False, # fixme
- },
'legendasdivx': {
'username': settings.legendasdivx.username,
'password': settings.legendasdivx.password,
@ -501,7 +496,7 @@ def get_throttled_providers():
except Exception:
# set empty content in throttled_providers.dat
logging.error("Invalid content in throttled_providers.dat. Resetting")
- set_throttled_providers(providers)
+ set_throttled_providers(str(providers))
finally:
return providers

View File

@ -11,7 +11,6 @@ from logging.handlers import TimedRotatingFileHandler
from utilities.central import get_log_file_path
from pytz_deprecation_shim import PytzUsageWarning
from .get_args import args
from .config import settings
@ -62,18 +61,18 @@ class UnwantedWaitressMessageFilter(logging.Filter):
if settings.general.debug:
# no filtering in debug mode
return True
- unwantedMessages = [
-     "Exception while serving /api/socket.io/",
-     ['Session is disconnected', 'Session not found' ],
-     "Exception while serving /api/socket.io/",
-     ["'Session is disconnected'", "'Session not found'" ],
-     "Exception while serving /api/socket.io/",
-     ['"Session is disconnected"', '"Session not found"' ],
-     "Exception when servicing %r",
+ unwantedMessages = [
+     "Exception while serving /api/socket.io/",
+     ['Session is disconnected', 'Session not found'],
+     "Exception while serving /api/socket.io/",
+     ["'Session is disconnected'", "'Session not found'"],
+     "Exception while serving /api/socket.io/",
+     ['"Session is disconnected"', '"Session not found"'],
+     "Exception when servicing %r",
+     [],
]

View File

@ -1,6 +1,6 @@
# coding=utf-8
- import apprise
+ from apprise import Apprise, AppriseAsset
import logging
from .database import TableSettingsNotifier, TableEpisodes, TableShows, TableMovies, database, insert, delete, select
@ -8,7 +8,7 @@ from .database import TableSettingsNotifier, TableEpisodes, TableShows, TableMov
def update_notifier():
# define apprise object
- a = apprise.Apprise()
+ a = Apprise()
# Retrieve all the details
results = a.details()
@ -70,9 +70,9 @@ def send_notifications(sonarr_series_id, sonarr_episode_id, message):
if not episode:
return
- asset = apprise.AppriseAsset(async_mode=False)
+ asset = AppriseAsset(async_mode=False)
- apobj = apprise.Apprise(asset=asset)
+ apobj = Apprise(asset=asset)
for provider in providers:
if provider.url is not None:
@ -101,9 +101,9 @@ def send_notifications_movie(radarr_id, message):
else:
movie_year = ''
- asset = apprise.AppriseAsset(async_mode=False)
+ asset = AppriseAsset(async_mode=False)
- apobj = apprise.Apprise(asset=asset)
+ apobj = Apprise(asset=asset)
for provider in providers:
if provider.url is not None:

View File

@ -10,7 +10,6 @@ from apscheduler.triggers.date import DateTrigger
from apscheduler.events import EVENT_JOB_SUBMITTED, EVENT_JOB_EXECUTED, EVENT_JOB_ERROR
from datetime import datetime, timedelta
from calendar import day_name
from math import floor
from random import randrange
from tzlocal import get_localzone
try:
@ -47,6 +46,10 @@ ONE_YEAR_IN_SECONDS = 60 * 60 * 24 * 365
def a_long_time_from_now(job):
+ # job isn't scheduled at all
+ if job.next_run_time is None:
+     return True
# currently defined as more than a year from now
delta = job.next_run_time - datetime.now(job.next_run_time.tzinfo)
return delta.total_seconds() > ONE_YEAR_IN_SECONDS
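The added guard is easy to exercise standalone; the helper below is copied from the diff above, and a `SimpleNamespace` stands in for an APScheduler job:

from datetime import datetime, timedelta, timezone
from types import SimpleNamespace

ONE_YEAR_IN_SECONDS = 60 * 60 * 24 * 365

def a_long_time_from_now(job):
    # job isn't scheduled at all (next_run_time is None, e.g. --no-tasks dev runs)
    if job.next_run_time is None:
        return True
    # otherwise: "a long time" means more than a year from now
    delta = job.next_run_time - datetime.now(job.next_run_time.tzinfo)
    return delta.total_seconds() > ONE_YEAR_IN_SECONDS

now = datetime.now(timezone.utc)
print(a_long_time_from_now(SimpleNamespace(next_run_time=None)))                       # True
print(a_long_time_from_now(SimpleNamespace(next_run_time=now + timedelta(days=400))))  # True
print(a_long_time_from_now(SimpleNamespace(next_run_time=now + timedelta(hours=1))))   # False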

View File

@ -87,9 +87,9 @@ class Server:
pass
def close_all(self):
print(f"Closing database...")
print("Closing database...")
close_database()
print(f"Closing webserver...")
print("Closing webserver...")
self.server.close()
def shutdown(self, status=EXIT_NORMAL):

View File

@ -12,7 +12,7 @@ from signalrcore.hub_connection_builder import HubConnectionBuilder
from collections import deque
from time import sleep
- from constants import headers
+ from constants import HEADERS
from app.event_handler import event_stream
from sonarr.sync.episodes import sync_episodes, sync_one_episode
from sonarr.sync.series import update_series, update_one_series
@ -39,7 +39,7 @@ class SonarrSignalrClientLegacy:
self.session = Session()
self.session.timeout = 60
self.session.verify = False
- self.session.headers = headers
+ self.session.headers = HEADERS
self.connection = None
self.connected = False
@ -162,7 +162,7 @@ class SonarrSignalrClient:
.with_url(f"{url_sonarr()}/signalr/messages?access_token={self.apikey_sonarr}",
options={
"verify_ssl": False,
"headers": headers
"headers": HEADERS
}) \
.with_automatic_reconnect({
"type": "raw",
@ -229,7 +229,7 @@ class RadarrSignalrClient:
.with_url(f"{url_radarr()}/signalr/messages?access_token={self.apikey_radarr}",
options={
"verify_ssl": False,
"headers": headers
"headers": HEADERS
}) \
.with_automatic_reconnect({
"type": "raw",

View File

@ -9,7 +9,7 @@ from flask import (request, abort, render_template, Response, session, send_file
from functools import wraps
from urllib.parse import unquote
- from constants import headers
+ from constants import HEADERS
from literals import FILE_LOG
from sonarr.info import url_api_sonarr
from radarr.info import url_api_radarr
@ -118,7 +118,7 @@ def series_images(url):
baseUrl = settings.sonarr.base_url
url_image = f'{url_api_sonarr()}{url.lstrip(baseUrl)}?apikey={apikey}'.replace('poster-250', 'poster-500')
try:
- req = requests.get(url_image, stream=True, timeout=15, verify=False, headers=headers)
+ req = requests.get(url_image, stream=True, timeout=15, verify=False, headers=HEADERS)
except Exception:
return '', 404
else:
@ -132,7 +132,7 @@ def movies_images(url):
baseUrl = settings.radarr.base_url
url_image = f'{url_api_radarr()}{url.lstrip(baseUrl)}?apikey={apikey}'
try:
- req = requests.get(url_image, stream=True, timeout=15, verify=False, headers=headers)
+ req = requests.get(url_image, stream=True, timeout=15, verify=False, headers=HEADERS)
except Exception:
return '', 404
else:
@ -173,7 +173,7 @@ def proxy(protocol, url):
url = f'{protocol}://{unquote(url)}'
params = request.args
try:
- result = requests.get(url, params, allow_redirects=False, verify=False, timeout=5, headers=headers)
+ result = requests.get(url, params, allow_redirects=False, verify=False, timeout=5, headers=HEADERS)
except Exception as e:
return dict(status=False, error=repr(e))
else:

View File

@ -1,13 +1,12 @@
# coding=utf-8
import os
import re
# set Bazarr user-agent used to make requests
headers = {"User-Agent": os.environ["SZ_USER_AGENT"]}
# hearing-impaired detection regex
hi_regex = re.compile(r'[*¶♫♪].{3,}[*¶♫♪]|[\[\(\{].{3,}[\]\)\}](?<!{\\an\d})')
HEADERS = {"User-Agent": os.environ["SZ_USER_AGENT"]}
# minimum file size for Bazarr to consider it a video
MINIMUM_VIDEO_SIZE = 20480
# maximum size for a subtitles file
MAXIMUM_SUBTITLE_SIZE = 1 * 1024 * 1024

View File

@ -19,7 +19,8 @@ from utilities.backup import restore_from_backup
from app.database import init_db
- from literals import *
+ from literals import (EXIT_CONFIG_CREATE_ERROR, ENV_BAZARR_ROOT_DIR, DIR_BACKUP, DIR_CACHE, DIR_CONFIG, DIR_DB, DIR_LOG,
+                       DIR_RESTORE, EXIT_REQUIREMENTS_ERROR)
from utilities.central import make_bazarr_dir, restart_bazarr, stop_bazarr
# set start time global variable as epoch

View File

@ -1,7 +1,6 @@
# coding=utf-8
import os
import io
from threading import Thread
@ -42,6 +41,8 @@ from languages.get_languages import load_language_in_db # noqa E402
from app.signalr_client import sonarr_signalr_client, radarr_signalr_client # noqa E402
from app.server import webserver, app # noqa E402
from app.announcements import get_announcements_to_file # noqa E402
+ from utilities.central import stop_bazarr # noqa E402
+ from literals import EXIT_NORMAL # noqa E402
if args.create_db_revision:
create_db_revision(app)

View File

@ -5,7 +5,7 @@ import logging
from app.config import settings
from radarr.info import url_api_radarr
- from constants import headers
+ from constants import HEADERS
def browse_radarr_filesystem(path='#'):
@ -16,7 +16,7 @@ def browse_radarr_filesystem(path='#'):
f"includeFiles=false&apikey={settings.radarr.apikey}")
try:
r = requests.get(url_radarr_api_filesystem, timeout=int(settings.radarr.http_timeout), verify=False,
- headers=headers)
+ headers=HEADERS)
r.raise_for_status()
except requests.exceptions.HTTPError:
logging.exception("BAZARR Error trying to get series from Radarr. Http error.")

View File

@ -8,7 +8,7 @@ from requests.exceptions import JSONDecodeError
from dogpile.cache import make_region
from app.config import settings, empty_values
- from constants import headers
+ from constants import HEADERS
region = make_region().configure('dogpile.cache.memory')
@ -30,7 +30,7 @@ class GetRadarrInfo:
try:
rv = f"{url_radarr()}/api/system/status?apikey={settings.radarr.apikey}"
radarr_json = requests.get(rv, timeout=int(settings.radarr.http_timeout), verify=False,
- headers=headers).json()
+ headers=HEADERS).json()
if 'version' in radarr_json:
radarr_version = radarr_json['version']
else:
@ -39,7 +39,7 @@ class GetRadarrInfo:
try:
rv = f"{url_radarr()}/api/v3/system/status?apikey={settings.radarr.apikey}"
radarr_version = requests.get(rv, timeout=int(settings.radarr.http_timeout), verify=False,
- headers=headers).json()['version']
+ headers=HEADERS).json()['version']
except JSONDecodeError:
logging.debug('BAZARR cannot get Radarr version')
radarr_version = 'unknown'

View File

@ -5,7 +5,7 @@ import requests
from app.config import settings
from radarr.info import url_api_radarr
- from constants import headers
+ from constants import HEADERS
def notify_radarr(radarr_id):
@ -15,6 +15,6 @@ def notify_radarr(radarr_id):
'name': 'RescanMovie',
'movieId': int(radarr_id)
}
- requests.post(url, json=data, timeout=int(settings.radarr.http_timeout), verify=False, headers=headers)
+ requests.post(url, json=data, timeout=int(settings.radarr.http_timeout), verify=False, headers=HEADERS)
except Exception:
logging.exception('BAZARR cannot notify Radarr')

View File

@ -8,7 +8,7 @@ from app.config import settings
from utilities.path_mappings import path_mappings
from app.database import TableMoviesRootfolder, TableMovies, database, delete, update, insert, select
from radarr.info import url_api_radarr
- from constants import headers
+ from constants import HEADERS
def get_radarr_rootfolder():
@ -19,7 +19,7 @@ def get_radarr_rootfolder():
url_radarr_api_rootfolder = f"{url_api_radarr()}rootfolder?apikey={apikey_radarr}"
try:
- rootfolder = requests.get(url_radarr_api_rootfolder, timeout=int(settings.radarr.http_timeout), verify=False, headers=headers)
+ rootfolder = requests.get(url_radarr_api_rootfolder, timeout=int(settings.radarr.http_timeout), verify=False, headers=HEADERS)
except requests.exceptions.ConnectionError:
logging.exception("BAZARR Error trying to get rootfolder from Radarr. Connection Error.")
return []

View File

@ -5,7 +5,7 @@ import logging
from app.config import settings
from radarr.info import get_radarr_info, url_api_radarr
- from constants import headers
+ from constants import HEADERS
def get_profile_list():
@ -16,7 +16,7 @@ def get_profile_list():
f"apikey={apikey_radarr}")
try:
- profiles_json = requests.get(url_radarr_api_movies, timeout=int(settings.radarr.http_timeout), verify=False, headers=headers)
+ profiles_json = requests.get(url_radarr_api_movies, timeout=int(settings.radarr.http_timeout), verify=False, headers=HEADERS)
except requests.exceptions.ConnectionError:
logging.exception("BAZARR Error trying to get profiles from Radarr. Connection Error.")
except requests.exceptions.Timeout:
@ -45,7 +45,7 @@ def get_tags():
url_radarr_api_series = f"{url_api_radarr()}tag?apikey={apikey_radarr}"
try:
- tagsDict = requests.get(url_radarr_api_series, timeout=int(settings.radarr.http_timeout), verify=False, headers=headers)
+ tagsDict = requests.get(url_radarr_api_series, timeout=int(settings.radarr.http_timeout), verify=False, headers=HEADERS)
except requests.exceptions.ConnectionError:
logging.exception("BAZARR Error trying to get tags from Radarr. Connection Error.")
return []
@ -69,7 +69,7 @@ def get_movies_from_radarr_api(apikey_radarr, radarr_id=None):
url_radarr_api_movies = f'{url_api_radarr()}movie{f"/{radarr_id}" if radarr_id else ""}?apikey={apikey_radarr}'
try:
- r = requests.get(url_radarr_api_movies, timeout=int(settings.radarr.http_timeout), verify=False, headers=headers)
+ r = requests.get(url_radarr_api_movies, timeout=int(settings.radarr.http_timeout), verify=False, headers=HEADERS)
if r.status_code == 404:
return
r.raise_for_status()
@ -100,7 +100,7 @@ def get_history_from_radarr_api(apikey_radarr, movie_id):
try:
r = requests.get(url_radarr_api_history, timeout=int(settings.sonarr.http_timeout), verify=False,
- headers=headers)
+ headers=HEADERS)
r.raise_for_status()
except requests.exceptions.HTTPError:
logging.exception("BAZARR Error trying to get history from Radarr. Http error.")

View File

@ -5,7 +5,7 @@ import logging
from app.config import settings
from sonarr.info import url_api_sonarr
- from constants import headers
+ from constants import HEADERS
def browse_sonarr_filesystem(path='#'):
@ -15,7 +15,7 @@ def browse_sonarr_filesystem(path='#'):
f"includeFiles=false&apikey={settings.sonarr.apikey}")
try:
r = requests.get(url_sonarr_api_filesystem, timeout=int(settings.sonarr.http_timeout), verify=False,
- headers=headers)
+ headers=HEADERS)
r.raise_for_status()
except requests.exceptions.HTTPError:
logging.exception("BAZARR Error trying to get series from Sonarr. Http error.")

View File

@ -8,7 +8,7 @@ from requests.exceptions import JSONDecodeError
from dogpile.cache import make_region
from app.config import settings, empty_values
- from constants import headers
+ from constants import HEADERS
region = make_region().configure('dogpile.cache.memory')
@ -30,7 +30,7 @@ class GetSonarrInfo:
try:
sv = f"{url_sonarr()}/api/system/status?apikey={settings.sonarr.apikey}"
sonarr_json = requests.get(sv, timeout=int(settings.sonarr.http_timeout), verify=False,
- headers=headers).json()
+ headers=HEADERS).json()
if 'version' in sonarr_json:
sonarr_version = sonarr_json['version']
else:
@ -39,7 +39,7 @@ class GetSonarrInfo:
try:
sv = f"{url_sonarr()}/api/v3/system/status?apikey={settings.sonarr.apikey}"
sonarr_version = requests.get(sv, timeout=int(settings.sonarr.http_timeout), verify=False,
- headers=headers).json()['version']
+ headers=HEADERS).json()['version']
except JSONDecodeError:
logging.debug('BAZARR cannot get Sonarr version')
sonarr_version = 'unknown'

View File

@ -5,7 +5,7 @@ import requests
from app.config import settings
from sonarr.info import url_api_sonarr
- from constants import headers
+ from constants import HEADERS
def notify_sonarr(sonarr_series_id):
@ -15,6 +15,6 @@ def notify_sonarr(sonarr_series_id):
'name': 'RescanSeries',
'seriesId': int(sonarr_series_id)
}
- requests.post(url, json=data, timeout=int(settings.sonarr.http_timeout), verify=False, headers=headers)
+ requests.post(url, json=data, timeout=int(settings.sonarr.http_timeout), verify=False, headers=HEADERS)
except Exception:
logging.exception('BAZARR cannot notify Sonarr')

View File

@ -8,7 +8,7 @@ from app.config import settings
from app.database import TableShowsRootfolder, TableShows, database, insert, update, delete, select
from utilities.path_mappings import path_mappings
from sonarr.info import url_api_sonarr
- from constants import headers
+ from constants import HEADERS
def get_sonarr_rootfolder():
@ -19,7 +19,7 @@ def get_sonarr_rootfolder():
url_sonarr_api_rootfolder = f"{url_api_sonarr()}rootfolder?apikey={apikey_sonarr}"
try:
- rootfolder = requests.get(url_sonarr_api_rootfolder, timeout=int(settings.sonarr.http_timeout), verify=False, headers=headers)
+ rootfolder = requests.get(url_sonarr_api_rootfolder, timeout=int(settings.sonarr.http_timeout), verify=False, headers=HEADERS)
except requests.exceptions.ConnectionError:
logging.exception("BAZARR Error trying to get rootfolder from Sonarr. Connection Error.")
return []

View File

@ -5,7 +5,7 @@ import logging
from app.config import settings
from sonarr.info import get_sonarr_info, url_api_sonarr
- from constants import headers
+ from constants import HEADERS
def get_profile_list():
@ -23,7 +23,7 @@ def get_profile_list():
try:
profiles_json = requests.get(url_sonarr_api_series, timeout=int(settings.sonarr.http_timeout), verify=False,
- headers=headers)
+ headers=HEADERS)
except requests.exceptions.ConnectionError:
logging.exception("BAZARR Error trying to get profiles from Sonarr. Connection Error.")
return None
@ -53,7 +53,7 @@ def get_tags():
url_sonarr_api_series = f"{url_api_sonarr()}tag?apikey={apikey_sonarr}"
try:
- tagsDict = requests.get(url_sonarr_api_series, timeout=int(settings.sonarr.http_timeout), verify=False, headers=headers)
+ tagsDict = requests.get(url_sonarr_api_series, timeout=int(settings.sonarr.http_timeout), verify=False, headers=HEADERS)
except requests.exceptions.ConnectionError:
logging.exception("BAZARR Error trying to get tags from Sonarr. Connection Error.")
return []
@ -71,7 +71,7 @@ def get_series_from_sonarr_api(apikey_sonarr, sonarr_series_id=None):
url_sonarr_api_series = (f"{url_api_sonarr()}series/{sonarr_series_id if sonarr_series_id else ''}?"
f"apikey={apikey_sonarr}")
try:
- r = requests.get(url_sonarr_api_series, timeout=int(settings.sonarr.http_timeout), verify=False, headers=headers)
+ r = requests.get(url_sonarr_api_series, timeout=int(settings.sonarr.http_timeout), verify=False, headers=HEADERS)
r.raise_for_status()
except requests.exceptions.HTTPError as e:
if e.response.status_code:
@ -110,7 +110,7 @@ def get_episodes_from_sonarr_api(apikey_sonarr, series_id=None, episode_id=None)
return
try:
- r = requests.get(url_sonarr_api_episode, timeout=int(settings.sonarr.http_timeout), verify=False, headers=headers)
+ r = requests.get(url_sonarr_api_episode, timeout=int(settings.sonarr.http_timeout), verify=False, headers=HEADERS)
r.raise_for_status()
except requests.exceptions.HTTPError:
logging.exception("BAZARR Error trying to get episodes from Sonarr. Http error.")
@ -144,7 +144,7 @@ def get_episodesFiles_from_sonarr_api(apikey_sonarr, series_id=None, episode_fil
try:
r = requests.get(url_sonarr_api_episodeFiles, timeout=int(settings.sonarr.http_timeout), verify=False,
- headers=headers)
+ headers=HEADERS)
r.raise_for_status()
except requests.exceptions.HTTPError:
logging.exception("BAZARR Error trying to get episodeFiles from Sonarr. Http error.")
@ -173,7 +173,7 @@ def get_history_from_sonarr_api(apikey_sonarr, episode_id):
try:
r = requests.get(url_sonarr_api_history, timeout=int(settings.sonarr.http_timeout), verify=False,
- headers=headers)
+ headers=HEADERS)
r.raise_for_status()
except requests.exceptions.HTTPError:
logging.exception("BAZARR Error trying to get history from Sonarr. Http error.")

View File

@ -9,8 +9,8 @@ from subliminal_patch import core
from subzero.language import Language
from charset_normalizer import detect
+ from constants import MAXIMUM_SUBTITLE_SIZE
from app.config import settings
- from constants import hi_regex
from utilities.path_mappings import path_mappings
@ -68,7 +68,7 @@ def guess_external_subtitles(dest_folder, subtitles, media_type, previously_inde
forced = True if os.path.splitext(os.path.splitext(subtitle)[0])[1] == '.forced' else False
# to improve performance, skip detection of files larger that 1M
- if os.path.getsize(subtitle_path) > 1 * 1024 * 1024:
+ if os.path.getsize(subtitle_path) > MAXIMUM_SUBTITLE_SIZE:
logging.debug(f"BAZARR subtitles file is too large to be text based. Skipping this file: "
f"{subtitle_path}")
continue
@ -119,7 +119,7 @@ def guess_external_subtitles(dest_folder, subtitles, media_type, previously_inde
# check if file exist:
if os.path.exists(subtitle_path) and os.path.splitext(subtitle_path)[1] in core.SUBTITLE_EXTENSIONS:
# to improve performance, skip detection of files larger that 1M
- if os.path.getsize(subtitle_path) > 1 * 1024 * 1024:
+ if os.path.getsize(subtitle_path) > MAXIMUM_SUBTITLE_SIZE:
logging.debug(f"BAZARR subtitles file is too large to be text based. Skipping this file: "
f"{subtitle_path}")
continue
@ -136,6 +136,6 @@ def guess_external_subtitles(dest_folder, subtitles, media_type, previously_inde
continue
text = text.decode(encoding)
- if bool(re.search(hi_regex, text)):
+ if bool(re.search(core.HI_REGEX, text)):
subtitles[subtitle] = Language.rebuild(subtitles[subtitle], forced=False, hi=True)
return subtitles

View File

@ -18,7 +18,7 @@ from app.config import get_scores, settings, get_array_from
from utilities.helper import get_target_folder, force_unicode
from app.database import get_profiles_list
- from .pool import update_pools, _get_pool, _init_pool
+ from .pool import update_pools, _get_pool
from .utils import get_video, _get_lang_obj, _get_scores, _set_forced_providers
from .processing import process_subtitle
@ -46,21 +46,7 @@ def manual_search(path, profile_id, providers, sceneName, title, media_type):
try:
if providers:
subtitles = list_all_subtitles([video], language_set, pool)
- if 'subscene' in providers:
-     s_pool = _init_pool("movie", profile_id, {"subscene"})
-     subscene_language_set = set()
-     for language in language_set:
-         if language.forced:
-             subscene_language_set.add(language)
-     if len(subscene_language_set):
-         s_pool.provider_configs.update({"subscene": {"only_foreign": True}})
-         subtitles_subscene = list_all_subtitles([video], subscene_language_set, s_pool)
-         s_pool.provider_configs.update({"subscene": {"only_foreign": False}})
-         subtitles[video] += subtitles_subscene[video]
else:
subtitles = []
logging.info("BAZARR All providers are throttled")
return 'All providers are throttled'
except Exception:

View File

@ -33,9 +33,9 @@ def sync_subtitles(video_path, srt_path, srt_lang, forced, percent_score, sonarr
'max_offset_seconds': str(settings.subsync.max_offset_seconds),
'no_fix_framerate': settings.subsync.no_fix_framerate,
'gss': settings.subsync.gss,
- 'reference': None, # means choose automatically within video file
- 'sonarr_series_id': sonarr_series_id,
- 'sonarr_episode_id': sonarr_episode_id,
+ 'reference': None, # means choose automatically within video file
+ 'sonarr_series_id': sonarr_series_id,
+ 'sonarr_episode_id': sonarr_episode_id,
'radarr_id': radarr_id,
}
subsync.sync(**sync_kwargs)

View File

@ -30,8 +30,8 @@ class SubSyncer:
self.vad = 'subs_then_webrtc'
self.log_dir_path = os.path.join(args.config_dir, 'log')
- def sync(self, video_path, srt_path, srt_lang,
-          max_offset_seconds, no_fix_framerate, gss, reference=None,
+ def sync(self, video_path, srt_path, srt_lang,
+          max_offset_seconds, no_fix_framerate, gss, reference=None,
sonarr_series_id=None, sonarr_episode_id=None, radarr_id=None):
self.reference = video_path
self.srtin = srt_path

View File

@ -97,7 +97,6 @@ def _set_forced_providers(pool, also_forced=False, forced_required=False):
pool.provider_configs.update(
{
"podnapisi": {'also_foreign': also_forced, "only_foreign": forced_required},
"subscene": {"only_foreign": forced_required},
"opensubtitles": {'also_foreign': also_forced, "only_foreign": forced_required}
}
)

View File

@ -3,33 +3,41 @@
# only methods can be specified here that do not cause other moudules to be loaded
# for other methods that use settings, etc., use utilities/helper.py
+ import contextlib
import logging
import os
from pathlib import Path
- from literals import *
+ from literals import (ENV_BAZARR_ROOT_DIR, DIR_LOG, ENV_STOPFILE, ENV_RESTARTFILE, EXIT_NORMAL, FILE_LOG)
def get_bazarr_dir(sub_dir):
path = os.path.join(os.environ[ENV_BAZARR_ROOT_DIR], sub_dir)
return path
def make_bazarr_dir(sub_dir):
path = get_bazarr_dir(sub_dir)
if not os.path.exists(path):
os.mkdir(path)
def get_log_file_path():
path = os.path.join(get_bazarr_dir(DIR_LOG), FILE_LOG)
return path
def get_stop_file_path():
return os.environ[ENV_STOPFILE]
def get_restart_file_path():
return os.environ[ENV_RESTARTFILE]
def stop_bazarr(status_code=EXIT_NORMAL, exit_main=True):
try:
- with open(get_stop_file_path(),'w', encoding='UTF-8') as file:
+ with open(get_stop_file_path(), 'w', encoding='UTF-8') as file:
# write out status code for final exit
file.write(f'{status_code}\n')
file.close()
@ -39,11 +47,15 @@ def stop_bazarr(status_code=EXIT_NORMAL, exit_main=True):
if exit_main:
raise SystemExit(status_code)
def restart_bazarr():
try:
Path(get_restart_file_path()).touch()
except Exception as e:
logging.error(f'BAZARR Cannot create restart file: {repr(e)}')
logging.info('Bazarr is being restarted...')
- raise SystemExit(EXIT_NORMAL)
+ # Wrap the SystemExit for a graceful restart. The SystemExit still performs the cleanup but the traceback is omitted
+ # preventing to throw the exception to the caller but still terminates the Python process with the desired Exit Code
+ with contextlib.suppress(SystemExit):
+     raise SystemExit(EXIT_NORMAL)
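What the wrapper buys is shown by a minimal standalone demo: the SystemExit still unwinds the stack (so `finally` cleanup runs), but it no longer propagates to the caller as a traceback:

import contextlib

with contextlib.suppress(SystemExit):
    try:
        raise SystemExit(0)  # would normally bubble up to the caller
    finally:
        print("cleanup still runs")  # finally blocks execute during the unwind

print("execution continues; no traceback was shown")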

View File

@ -9,4 +9,4 @@ From newest to oldest:
{{#each commits}}
- {{subject}}{{#if href}} [{{shorthash}}]({{href}}){{/if}}
{{/each}}
- {{/each}}
+ {{/each}}

View File

@ -9,4 +9,4 @@ From newest to oldest:
{{#each commits}}
- {{subject}}{{#if href}} [{{shorthash}}]({{href}}){{/if}}
{{/each}}
- {{/each}}
+ {{/each}}

View File

@ -15,5 +15,4 @@ deathbycaptcha # unknown version, only found on gist
git+https://github.com/pannal/libfilebot#egg=libfilebot
git+https://github.com/RobinDavid/pyADS.git@28a2f6dbfb357f85b2c2f49add770b336e88840d#egg=pyads
py7zr==0.7.0 # modified to prevent importing of modules that can't be vendored
- subscene-api==1.0.0 # modified specificaly for Bazarr
subliminal==2.1.0 # modified specifically for Bazarr

View File

@ -1,92 +0,0 @@
# coding=utf-8
from __future__ import absolute_import
from babelfish import LanguageReverseConverter
from subliminal.exceptions import ConfigurationError
from subzero.language import Language
# alpha3 codes extracted from `https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes`
# Subscene language list extracted from it's upload form
from_subscene = {
'Farsi/Persian': 'fas', 'Greek': 'ell', 'Greenlandic': 'kal',
'Malay': 'msa', 'Pashto': 'pus', 'Punjabi': 'pan', 'Swahili': 'swa'
}
from_subscene_with_country = {
'Brazillian Portuguese': ('por', 'BR')
}
to_subscene_with_country = {val: key for key, val in from_subscene_with_country.items()}
to_subscene = {v: k for k, v in from_subscene.items()}
exact_languages_alpha3 = [
'ara', 'aze', 'bel', 'ben', 'bos', 'bul', 'cat', 'ces', 'dan', 'deu',
'eng', 'epo', 'est', 'eus', 'fin', 'fra', 'heb', 'hin', 'hrv', 'hun',
'hye', 'ind', 'isl', 'ita', 'jpn', 'kat', 'kor', 'kur', 'lav', 'lit',
'mal', 'mkd', 'mni', 'mon', 'mya', 'nld', 'nor', 'pol', 'por', 'ron',
'rus', 'sin', 'slk', 'slv', 'som', 'spa', 'sqi', 'srp', 'sun', 'swe',
'tam', 'tel', 'tgl', 'tha', 'tur', 'ukr', 'urd', 'vie', 'yor'
]
language_ids = {
'ara': 2, 'dan': 10, 'nld': 11, 'eng': 13, 'fas': 46, 'fin': 17,
'fra': 18, 'heb': 22, 'ind': 44, 'ita': 26, 'msa': 50, 'nor': 30,
'ron': 33, 'spa': 38, 'swe': 39, 'vie': 45, 'sqi': 1, 'hye': 73,
'aze': 55, 'eus': 74, 'bel': 68, 'ben': 54, 'bos': 60, 'bul': 5,
'mya': 61, 'cat': 49, 'hrv': 8, 'ces': 9, 'epo': 47, 'est': 16,
'kat': 62, 'deu': 19, 'ell': 21, 'kal': 57, 'hin': 51, 'hun': 23,
'isl': 25, 'jpn': 27, 'kor': 28, 'kur': 52, 'lav': 29, 'lit': 43,
'mkd': 48, 'mal': 64, 'mni': 65, 'mon': 72, 'pus': 67, 'pol': 31,
'por': 32, 'pan': 66, 'rus': 34, 'srp': 35, 'sin': 58, 'slk': 36,
'slv': 37, 'som': 70, 'tgl': 53, 'tam': 59, 'tel': 63, 'tha': 40,
'tur': 41, 'ukr': 56, 'urd': 42, 'yor': 71, 'pt-BR': 4
}
# TODO: specify codes for unspecified_languages
unspecified_languages = [
'Big 5 code', 'Bulgarian/ English',
'Chinese BG code', 'Dutch/ English', 'English/ German',
'Hungarian/ English', 'Rohingya'
]
supported_languages = {Language(l) for l in exact_languages_alpha3}
alpha3_of_code = {l.name: l.alpha3 for l in supported_languages}
supported_languages.update({Language(l) for l in to_subscene})
supported_languages.update({Language(lang, cr) for lang, cr in to_subscene_with_country})
class SubsceneConverter(LanguageReverseConverter):
codes = {l.name for l in supported_languages}
def convert(self, alpha3, country=None, script=None):
if alpha3 in exact_languages_alpha3:
return Language(alpha3).name
if alpha3 in to_subscene:
return to_subscene[alpha3]
if (alpha3, country) in to_subscene_with_country:
return to_subscene_with_country[(alpha3, country)]
raise ConfigurationError('Unsupported language for subscene: %s, %s, %s' % (alpha3, country, script))
def reverse(self, code):
if code in from_subscene_with_country:
return from_subscene_with_country[code]
if code in from_subscene:
return (from_subscene[code],)
if code in alpha3_of_code:
return (alpha3_of_code[code],)
if code in unspecified_languages:
raise NotImplementedError("currently this language is unspecified: %s" % code)
raise ConfigurationError('Unsupported language code for subscene: %s' % code)

View File

@ -49,6 +49,8 @@ SUBTITLE_EXTENSIONS = ('.srt', '.sub', '.smi', '.txt', '.ssa', '.ass', '.mpl', '
_POOL_LIFETIME = datetime.timedelta(hours=12)
+ HI_REGEX = re.compile(r'[*¶♫♪].{3,}[*¶♫♪]|[\[\(\{].{3,}[\]\)\}](?<!{\\an\d})')
def remove_crap_from_fn(fn):
# in case of the second regex part, the legit release group name will be in group(2), if it's followed by [string]
@ -1191,7 +1193,7 @@ def save_subtitles(file_path, subtitles, single=False, directory=None, chmod=Non
must_remove_hi = 'remove_HI' in subtitle.mods
# check content
- if subtitle.content is None:
+ if subtitle.content is None or subtitle.text is None:
logger.error('Skipping subtitle %r: no content', subtitle)
continue
@ -1201,6 +1203,8 @@ def save_subtitles(file_path, subtitles, single=False, directory=None, chmod=Non
continue
# create subtitle path
+ if subtitle.text and bool(re.search(HI_REGEX, subtitle.text)):
+     subtitle.language.hi = True
subtitle_path = get_subtitle_path(file_path, None if single else subtitle.language,
forced_tag=subtitle.language.forced,
hi_tag=False if must_remove_hi else subtitle.language.hi, tags=tags)
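The HI detection regex added to subliminal_patch core is self-contained and can be tried directly; bracketed sound cues of three or more characters match, while ASS positioning tags such as {\an8} are excluded by the negative lookbehind:

import re

HI_REGEX = re.compile(r'[*¶♫♪].{3,}[*¶♫♪]|[\[\(\{].{3,}[\]\)\}](?<!{\\an\d})')

print(bool(HI_REGEX.search("[DOOR SLAMS]")))             # True: bracketed sound cue
print(bool(HI_REGEX.search("♪ ominous music plays ♪")))  # True: music-note markers
print(bool(HI_REGEX.search("{\\an8}A sign reads...")))   # False: ASS tag excluded
print(bool(HI_REGEX.search("Plain dialogue line")))      # False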

View File

@ -46,10 +46,11 @@ class AnimeToshoSubtitle(Subtitle):
"""AnimeTosho.org Subtitle."""
provider_name = 'animetosho'
- def __init__(self, language, download_link, meta):
+ def __init__(self, language, download_link, meta, release_info):
super(AnimeToshoSubtitle, self).__init__(language, page_link=download_link)
self.meta = meta
self.download_link = download_link
+ self.release_info = release_info
@property
def id(self):
@ -88,7 +89,9 @@ class AnimeToshoProvider(Provider, ProviderSubtitleArchiveMixin):
def list_subtitles(self, video, languages):
if not video.series_anidb_episode_id:
raise ProviderError("Video does not have an AnimeTosho Episode ID!")
logger.debug('Skipping video %r. It is not an anime or the anidb_episode_id could not be identified', video)
return []
return [s for s in self._get_series(video.series_anidb_episode_id) if s.language in languages]
@ -150,6 +153,7 @@ class AnimeToshoProvider(Provider, ProviderSubtitleArchiveMixin):
lang,
storage_download_url + '{}/{}.xz'.format(hex_id, subtitle_file['id']),
meta=file,
+ release_info=entry.get('title'),
)
logger.debug('Found subtitle %r', subtitle)

View File

@ -382,7 +382,7 @@ def _clean_ass_subtitles(path, output_path):
logger.debug("Cleaned lines: %d", abs(len(lines) - len(clean_lines)))
with open(output_path, "w") as f:
with open(output_path, "w", encoding="utf-8", errors="ignore") as f:
f.writelines(clean_lines)
logger.debug("Lines written to output path: %s", output_path)

View File

@ -126,7 +126,7 @@ class SubdivxSubtitlesProvider(Provider):
titles = [video.series if episode else video.title]
try:
- titles.extend(video.alternative_titles)
+ titles.extend(video.alternative_series if episode else video.alternative_titles)
except:
pass
else:
@ -138,6 +138,7 @@ class SubdivxSubtitlesProvider(Provider):
# TODO: cache pack queries (TV SHOW S01).
# Too many redundant server calls.
for title in titles:
+ title = _series_sanitizer(title)
for query in (
f"{title} S{video.season:02}E{video.episode:02}",
f"{title} S{video.season:02}",
@ -297,20 +298,31 @@ def _check_episode(video, title):
) and season_num == video.season
series_title = _SERIES_RE.sub("", title).strip()
+ series_title = _series_sanitizer(series_title)
- distance = abs(len(series_title) - len(video.series))
+ for video_series_title in [video.series] + video.alternative_series:
+     video_series_title = _series_sanitizer(video_series_title)
+     distance = abs(len(series_title) - len(video_series_title))
- series_matched = distance < 4 and ep_matches
+     series_matched = (distance < 4 or video_series_title in series_title) and ep_matches
- logger.debug(
-     "Series matched? %s [%s -> %s] [title distance: %d]",
-     series_matched,
-     video,
-     title,
-     distance,
- )
+     logger.debug(
+         "Series matched? %s [%s -> %s] [title distance: %d]",
+         series_matched,
+         video_series_title,
+         series_title,
+         distance,
+     )
- return series_matched
+     if series_matched:
+         return True
+ return False
+ def _series_sanitizer(title):
+     title = re.sub(r"\'|\.+", '', title)  # remove single quote and dot
+     title = re.sub(r"\W+", ' ', title)  # replace by a space anything other than a letter, digit or underscore
+     return re.sub(r"([A-Z])\s(?=[A-Z]\b)", '', title).strip()  # Marvels Agent of S.H.I.E.L.D
def _check_movie(video, title):
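Taking _series_sanitizer as shown above at face value, a quick trace of the "S.H.I.E.L.D." case called out in its own comment (outputs verified against the regexes):

import re

def _series_sanitizer(title):
    title = re.sub(r"\'|\.+", '', title)  # remove single quote and dot
    title = re.sub(r"\W+", ' ', title)  # replace by a space anything other than a letter, digit or underscore
    return re.sub(r"([A-Z])\s(?=[A-Z]\b)", '', title).strip()  # Marvels Agent of S.H.I.E.L.D

print(_series_sanitizer("Marvel's Agents of S.H.I.E.L.D."))  # -> "Marvels Agents of SHIELD"
print(_series_sanitizer("Dr. Stone"))                        # -> "Dr Stone"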

View File

@ -1,366 +0,0 @@
# coding=utf-8
import io
import logging
import os
import time
import traceback
from urllib import parse
import requests
import inflect
import re
import json
import html
import zipfile
import rarfile
from babelfish import language_converters
from guessit import guessit
from dogpile.cache.api import NO_VALUE
from requests.exceptions import RequestException
from subliminal import Episode, ProviderError
from subliminal.video import Episode, Movie
from subliminal.exceptions import ConfigurationError, ServiceUnavailable
from subliminal.utils import sanitize_release_group
from subliminal.cache import region
from subliminal_patch.http import RetryingCFSession
from subliminal_patch.providers import Provider, reinitialize_on_error
from subliminal_patch.providers.mixins import ProviderSubtitleArchiveMixin
from subliminal_patch.subtitle import Subtitle, guess_matches
from subliminal_patch.converters.subscene import language_ids, supported_languages
from subscene_api.subscene import search, SearchTypes, Subtitle as APISubtitle, SITE_DOMAIN
from subzero.language import Language
p = inflect.engine()
language_converters.register('subscene = subliminal_patch.converters.subscene:SubsceneConverter')
logger = logging.getLogger(__name__)
class SubsceneSubtitle(Subtitle):
provider_name = 'subscene'
hearing_impaired_verifiable = True
is_pack = False
page_link = None
season = None
episode = None
releases = None
def __init__(self, language, release_info, hearing_impaired=False, page_link=None, encoding=None, mods=None,
asked_for_release_group=None, asked_for_episode=None):
super(SubsceneSubtitle, self).__init__(language, hearing_impaired=hearing_impaired, page_link=page_link,
encoding=encoding, mods=mods)
self.release_info = self.releases = release_info
self.asked_for_episode = asked_for_episode
self.asked_for_release_group = asked_for_release_group
self.season = None
self.episode = None
@classmethod
def from_api(cls, s):
return cls(Language.fromsubscene(s.language.strip()), s.title, hearing_impaired=s.hearing_impaired,
page_link=s.url)
@property
def id(self):
return self.page_link
@property
def numeric_id(self):
return self.page_link.split("/")[-1]
def get_matches(self, video):
matches = set()
if self.release_info.strip() == get_video_filename(video):
logger.debug("Using hash match as the release name is the same")
matches |= {"hash"}
# episode
if isinstance(video, Episode):
guess = guessit(self.release_info, {'type': 'episode'})
self.season = guess.get("season")
self.episode = guess.get("episode")
matches |= guess_matches(video, guess)
if "season" in matches and "episode" not in guess:
# pack
matches.add("episode")
logger.debug("%r is a pack", self)
self.is_pack = True
if "title" in guess and "year" in matches:
if video.series in guess['title']:
matches.add("series")
# movie
else:
guess = guessit(self.release_info, {'type': 'movie'})
matches |= guess_matches(video, guess)
if video.release_group and "release_group" not in matches and "release_group" in guess:
if sanitize_release_group(video.release_group) in sanitize_release_group(guess["release_group"]):
matches.add("release_group")
self.matches = matches
return matches
def get_download_link(self, session):
return APISubtitle.get_zipped_url(self.page_link, session)
def get_video_filename(video):
return os.path.splitext(os.path.basename(video.original_name))[0]
class SubsceneProvider(Provider, ProviderSubtitleArchiveMixin):
"""
This currently only searches for the filename on SubScene. It doesn't open every found subtitle page to avoid
massive hammering, thus it can't determine whether a subtitle is only-foreign or not.
"""
subtitle_class = SubsceneSubtitle
languages = supported_languages
languages.update(set(Language.rebuild(l, forced=True) for l in languages))
languages.update(set(Language.rebuild(l, hi=True) for l in languages))
video_types = (Episode, Movie)
session = None
skip_wrong_fps = False
hearing_impaired_verifiable = True
only_foreign = False
username = None
password = None
search_throttle = 8 # seconds
def __init__(self, only_foreign=False, username=None, password=None):
if not all((username, password)):
raise ConfigurationError('Username and password must be specified')
self.only_foreign = only_foreign
self.username = username
self.password = password
def initialize(self):
logger.info("Creating session")
self.session = RetryingCFSession()
prev_cookies = region.get("subscene_cookies2")
if prev_cookies != NO_VALUE:
logger.debug("Re-using old subscene cookies: %r", prev_cookies)
self.session.cookies.update(prev_cookies)
else:
logger.debug("Logging in")
self.login()
def login(self):
r = self.session.get("https://subscene.com/account/login")
if "Server Error" in r.text:
logger.error("Login unavailable; Maintenance?")
raise ServiceUnavailable("Login unavailable; Maintenance?")
match = re.search(r"<script id='modelJson' type='application/json'>\s*(.+)\s*</script>", r.text)
if match:
            data = json.loads(html.unescape(match.group(1)))
login_url = parse.urljoin(data["siteUrl"], data["loginUrl"])
time.sleep(1.0)
r = self.session.post(login_url,
{
"username": self.username,
"password": self.password,
data["antiForgery"]["name"]: data["antiForgery"]["value"]
})
pep_content = re.search(r"<form method=\"post\" action=\"https://subscene\.com/\">"
r".+name=\"id_token\".+?value=\"(?P<id_token>.+?)\".*?"
r"access_token\".+?value=\"(?P<access_token>.+?)\".+?"
r"token_type.+?value=\"(?P<token_type>.+?)\".+?"
r"expires_in.+?value=\"(?P<expires_in>.+?)\".+?"
r"scope.+?value=\"(?P<scope>.+?)\".+?"
r"state.+?value=\"(?P<state>.+?)\".+?"
r"session_state.+?value=\"(?P<session_state>.+?)\"",
r.text, re.MULTILINE | re.DOTALL)
if pep_content:
r = self.session.post(SITE_DOMAIN, pep_content.groupdict())
try:
r.raise_for_status()
except Exception:
raise ProviderError("Something went wrong when trying to log in: %s", traceback.format_exc())
else:
cj = self.session.cookies.copy()
store_cks = ("scene", "idsrv", "idsrv.xsrf", "idsvr.clients", "idsvr.session", "idsvr.username")
for cn in self.session.cookies.keys():
if cn not in store_cks:
del cj[cn]
logger.debug("Storing cookies: %r", cj)
region.set("subscene_cookies2", cj)
return
raise ProviderError("Something went wrong when trying to log in #1")
def terminate(self):
logger.info("Closing session")
self.session.close()
def _create_filters(self, languages):
self.filters = dict(HearingImpaired="2")
acc_filters = self.filters.copy()
if self.only_foreign:
self.filters["ForeignOnly"] = "True"
acc_filters["ForeignOnly"] = self.filters["ForeignOnly"].lower()
logger.info("Only searching for foreign/forced subtitles")
selected_ids = []
for l in languages:
lid = language_ids.get(l.basename, language_ids.get(l.alpha3, None))
if lid:
selected_ids.append(str(lid))
acc_filters["SelectedIds"] = selected_ids
self.filters["LanguageFilter"] = ",".join(acc_filters["SelectedIds"])
last_filters = region.get("subscene_filters")
if last_filters != acc_filters:
region.set("subscene_filters", acc_filters)
logger.debug("Setting account filters to %r", acc_filters)
self.session.post("https://u.subscene.com/filter", acc_filters, allow_redirects=False)
logger.debug("Filter created: '%s'" % self.filters)
def _enable_filters(self):
self.session.cookies.update(self.filters)
logger.debug("Filters applied")
def list_subtitles(self, video, languages):
if not video.original_name:
logger.info("Skipping search because we don't know the original release name")
return []
self._create_filters(languages)
self._enable_filters()
if isinstance(video, Episode):
international_titles = list(set([video.series] + video.alternative_series[:1]))
subtitles = [s for s in self.query(video, international_titles) if s.language in languages]
            if not subtitles:
us_titles = [x + ' (US)' for x in international_titles]
subtitles = [s for s in self.query(video, us_titles) if s.language in languages]
return subtitles
else:
titles = list(set([video.title] + video.alternative_titles[:1]))
return [s for s in self.query(video, titles) if s.language in languages]
def download_subtitle(self, subtitle):
if subtitle.pack_data:
logger.info("Using previously downloaded pack data")
if rarfile.is_rarfile(io.BytesIO(subtitle.pack_data)):
logger.debug('Identified rar archive')
archive = rarfile.RarFile(io.BytesIO(subtitle.pack_data))
elif zipfile.is_zipfile(io.BytesIO(subtitle.pack_data)):
logger.debug('Identified zip archive')
archive = zipfile.ZipFile(io.BytesIO(subtitle.pack_data))
else:
logger.error('Unsupported compressed format')
return
subtitle.pack_data = None
try:
subtitle.content = self.get_subtitle_from_archive(subtitle, archive)
return
except ProviderError:
pass
# open the archive
r = self.session.get(subtitle.get_download_link(self.session), timeout=10)
r.raise_for_status()
archive_stream = io.BytesIO(r.content)
if rarfile.is_rarfile(archive_stream):
logger.debug('Identified rar archive')
archive = rarfile.RarFile(archive_stream)
elif zipfile.is_zipfile(archive_stream):
logger.debug('Identified zip archive')
archive = zipfile.ZipFile(archive_stream)
else:
logger.error('Unsupported compressed format')
return
subtitle.content = self.get_subtitle_from_archive(subtitle, archive)
# store archive as pack_data for later caching
subtitle.pack_data = r.content
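
The rar/zip sniffing above appears twice (once for cached pack data, once for a fresh download); the shared pattern, as a standalone sketch mirroring the calls used here:

    import io
    import zipfile

    import rarfile


    def open_archive(data):
        # return a RarFile/ZipFile for the given bytes, or None if unsupported
        stream = io.BytesIO(data)
        if rarfile.is_rarfile(stream):
            return rarfile.RarFile(stream)
        if zipfile.is_zipfile(stream):
            return zipfile.ZipFile(stream)
        return None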
def parse_results(self, video, film):
subtitles = []
for s in film.subtitles:
try:
subtitle = SubsceneSubtitle.from_api(s)
except NotImplementedError as e:
logger.info(e)
continue
subtitle.asked_for_release_group = video.release_group
if isinstance(video, Episode):
subtitle.asked_for_episode = video.episode
if self.only_foreign:
subtitle.language = Language.rebuild(subtitle.language, forced=True)
# set subtitle language to hi if it's hearing_impaired
if subtitle.hearing_impaired:
subtitle.language = Language.rebuild(subtitle.language, hi=True)
subtitles.append(subtitle)
logger.debug('Found subtitle %r', subtitle)
return subtitles
def do_search(self, *args, **kwargs):
try:
return search(*args, **kwargs)
except requests.HTTPError:
region.delete("subscene_cookies2")
raise
@reinitialize_on_error((RequestException,), attempts=1)
def query(self, video, titles):
subtitles = []
if isinstance(video, Episode):
more_than_one = len(titles) > 1
for series in titles:
term = u"%s - %s Season" % (series, p.number_to_words("%sth" % video.season).capitalize())
logger.debug('Searching with series and season: %s', term)
film = self.do_search(term, session=self.session, release=False, throttle=self.search_throttle,
limit_to=SearchTypes.TvSerie)
if not film and video.season == 1:
logger.debug('Searching with series name: %s', series)
film = self.do_search(series, session=self.session, release=False, throttle=self.search_throttle,
limit_to=SearchTypes.TvSerie)
if film and film.subtitles:
logger.debug('Searching found: %s', len(film.subtitles))
subtitles += self.parse_results(video, film)
else:
logger.debug('No results found')
if more_than_one:
time.sleep(self.search_throttle)
else:
more_than_one = len(titles) > 1
for title in titles:
logger.debug('Searching for movie results: %r', title)
film = self.do_search(title, year=video.year, session=self.session, limit_to=None, release=False,
throttle=self.search_throttle)
if film and film.subtitles:
subtitles += self.parse_results(video, film)
if more_than_one:
time.sleep(self.search_throttle)
logger.info("%s subtitles found" % len(subtitles))
return subtitles
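
A minimal sketch of the season term built in query() above, assuming `p` is an `inflect` engine (only its `number_to_words` use is visible here):

    import inflect

    p = inflect.engine()
    term = u"%s - %s Season" % ("Some Show", p.number_to_words("%sth" % 5).capitalize())
    # term == "Some Show - Fifth Season"  (series name is illustrative)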

View File

@ -1,410 +0,0 @@
# -*- coding: utf-8 -*-
from difflib import SequenceMatcher
import functools
import logging
import re
import time
import urllib.parse
from bs4 import BeautifulSoup as bso
import cloudscraper
from guessit import guessit
from requests import Session
from requests.exceptions import HTTPError
from subliminal.exceptions import ProviderError
from subliminal_patch.core import Episode
from subliminal_patch.core import Movie
from subliminal_patch.exceptions import APIThrottled
from subliminal_patch.providers import Provider
from subliminal_patch.providers.utils import get_archive_from_bytes
from subliminal_patch.providers.utils import get_subtitle_from_archive
from subliminal_patch.providers.utils import update_matches
from subliminal_patch.subtitle import Subtitle
from subzero.language import Language
logger = logging.getLogger(__name__)
class SubsceneSubtitle(Subtitle):
provider_name = "subscene_cloudscraper"
hash_verifiable = False
def __init__(self, language, page_link, release_info, episode_number=None):
super().__init__(language, page_link=page_link)
self.release_info = release_info
self.episode_number = episode_number
self.episode_title = None
self._matches = set(
("title", "year")
if episode_number is None
else ("title", "series", "year", "season", "episode")
)
def get_matches(self, video):
update_matches(self._matches, video, self.release_info)
return self._matches
@property
def id(self):
return self.page_link
_BASE_URL = "https://subscene.com"
# TODO: add more seasons and languages
_SEASONS = (
"First",
"Second",
"Third",
"Fourth",
"Fifth",
"Sixth",
"Seventh",
"Eighth",
"Ninth",
"Tenth",
"Eleventh",
"Twelfth",
"Thirdteenth",
"Fourthteenth",
"Fifteenth",
"Sixteenth",
"Seventeenth",
"Eightheenth",
"Nineteenth",
"Tweentieth",
)
_LANGUAGE_MAP = {
"english": "eng",
"farsi_persian": "per",
"arabic": "ara",
"spanish": "spa",
"portuguese": "por",
"italian": "ita",
"dutch": "dut",
"hebrew": "heb",
"indonesian": "ind",
"danish": "dan",
"norwegian": "nor",
"bengali": "ben",
"bulgarian": "bul",
"croatian": "hrv",
"swedish": "swe",
"vietnamese": "vie",
"czech": "cze",
"finnish": "fin",
"french": "fre",
"german": "ger",
"greek": "gre",
"hungarian": "hun",
"icelandic": "ice",
"japanese": "jpn",
"macedonian": "mac",
"malay": "may",
"polish": "pol",
"romanian": "rum",
"russian": "rus",
"serbian": "srp",
"thai": "tha",
"turkish": "tur",
}
class SubsceneProvider(Provider):
provider_name = "subscene_cloudscraper"
_movie_title_regex = re.compile(r"^(.+?)( \((\d{4})\))?$")
_tv_show_title_regex = re.compile(
r"^(.+?) [-\(]\s?(.*?) (season|series)\)?( \((\d{4})\))?$"
)
_supported_languages = {}
_supported_languages["brazillian-portuguese"] = Language("por", "BR")
for key, val in _LANGUAGE_MAP.items():
_supported_languages[key] = Language.fromalpha3b(val)
_supported_languages_reversed = {
val: key for key, val in _supported_languages.items()
}
languages = set(_supported_languages.values())
video_types = (Episode, Movie)
subtitle_class = SubsceneSubtitle
def initialize(self):
pass
def terminate(self):
pass
def _scraper_call(self, url, retry=7, method="GET", sleep=5, **kwargs):
last_exc = None
for n in range(retry):
# Creating an instance for every try in order to avoid dropped connections.
# This could probably be improved!
scraper = cloudscraper.create_scraper()
if method == "GET":
req = scraper.get(url, **kwargs)
elif method == "POST":
req = scraper.post(url, **kwargs)
else:
raise NotImplementedError(f"{method} not allowed")
try:
req.raise_for_status()
except HTTPError as error:
logger.debug(
"'%s' returned. Trying again [%d] in %s", error, n + 1, sleep
)
last_exc = error
time.sleep(sleep)
else:
return req
raise ProviderError("403 Retry count exceeded") from last_exc
def _gen_results(self, query):
url = (
f"{_BASE_URL}/subtitles/searchbytitle?query={urllib.parse.quote(query)}&l="
)
result = self._scraper_call(url, method="POST")
soup = bso(result.content, "html.parser")
for title in soup.select("li div[class='title'] a"):
yield title
def _search_movie(self, title, year):
title = title.lower()
year = str(year)
found_movie = None
results = []
for result in self._gen_results(title):
text = result.text.lower()
match = self._movie_title_regex.match(text)
if not match:
continue
match_title = match.group(1)
match_year = match.group(3)
if year == match_year:
results.append(
{
"href": result.get("href"),
"similarity": SequenceMatcher(None, title, match_title).ratio(),
}
)
if results:
results.sort(key=lambda x: x["similarity"], reverse=True)
found_movie = results[0]["href"]
logger.debug("Movie found: %s", results[0])
return found_movie
def _search_tv_show_season(self, title, season, year=None):
try:
season_str = _SEASONS[season - 1].lower()
except IndexError:
logger.debug("Season number not supported: %s", season)
return None
found_tv_show_season = None
results = []
for result in self._gen_results(title):
text = result.text.lower()
match = self._tv_show_title_regex.match(text)
if not match:
logger.debug("Series title not matched: %s", text)
continue
else:
logger.debug("Series title matched: %s", text)
match_title = match.group(1)
match_season = match.group(2)
# Match "complete series" titles as they usually contain season packs
if season_str == match_season or "complete" in match_season:
plus = 0.1 if year and str(year) in text else 0
results.append(
{
"href": result.get("href"),
"similarity": SequenceMatcher(None, title, match_title).ratio()
+ plus,
}
)
if results:
results.sort(key=lambda x: x["similarity"], reverse=True)
found_tv_show_season = results[0]["href"]
logger.debug("TV Show season found: %s", results[0])
return found_tv_show_season
def _find_movie_subtitles(self, path, language):
soup = self._get_subtitle_page_soup(path, language)
subtitles = []
for item in soup.select("tr"):
subtitle = _get_subtitle_from_item(item, language)
if subtitle is None:
continue
logger.debug("Found subtitle: %s", subtitle)
subtitles.append(subtitle)
return subtitles
def _find_episode_subtitles(
self, path, season, episode, language, episode_title=None
):
soup = self._get_subtitle_page_soup(path, language)
subtitles = []
for item in soup.select("tr"):
valid_item = None
clean_text = " ".join(item.text.split())
if not clean_text:
continue
# It will return list values
guess = _memoized_episode_guess(clean_text)
if "season" not in guess:
if "complete series" in clean_text.lower():
logger.debug("Complete series pack found: %s", clean_text)
guess["season"] = [season]
else:
logger.debug("Nothing guessed from release: %s", clean_text)
continue
if season in guess["season"] and episode in guess.get("episode", []):
logger.debug("Episode match found: %s - %s", guess, clean_text)
valid_item = item
elif season in guess["season"] and not "episode" in guess:
logger.debug("Season pack found: %s", clean_text)
valid_item = item
if valid_item is None:
continue
subtitle = _get_subtitle_from_item(item, language, episode)
if subtitle is None:
continue
subtitle.episode_title = episode_title
logger.debug("Found subtitle: %s", subtitle)
subtitles.append(subtitle)
return subtitles
def _get_subtitle_page_soup(self, path, language):
language_path = self._supported_languages_reversed[language]
result = self._scraper_call(f"{_BASE_URL}{path}/{language_path}")
return bso(result.content, "html.parser")
def list_subtitles(self, video, languages):
is_episode = isinstance(video, Episode)
if is_episode:
result = self._search_tv_show_season(video.series, video.season, video.year)
else:
result = self._search_movie(video.title, video.year)
if result is None:
logger.debug("No results")
return []
subtitles = []
for language in languages:
if is_episode:
subtitles.extend(
self._find_episode_subtitles(
result, video.season, video.episode, language, video.title
)
)
else:
subtitles.extend(self._find_movie_subtitles(result, language))
return subtitles
def download_subtitle(self, subtitle):
# TODO: add MustGetBlacklisted support
result = self._scraper_call(subtitle.page_link)
soup = bso(result.content, "html.parser")
try:
download_url = _BASE_URL + str(
soup.select_one("a[id='downloadButton']")["href"] # type: ignore
)
except (AttributeError, KeyError, TypeError):
raise APIThrottled(f"Couldn't get download url from {subtitle.page_link}")
downloaded = self._scraper_call(download_url)
archive = get_archive_from_bytes(downloaded.content)
if archive is None:
raise APIThrottled(f"Invalid archive: {subtitle.page_link}")
subtitle.content = get_subtitle_from_archive(
archive,
episode=subtitle.episode_number,
episode_title=subtitle.episode_title,
)
@functools.lru_cache(2048)
def _memoized_episode_guess(content):
# Use include to save time from unnecessary checks
return guessit(
content,
{
"type": "episode",
# Add codec keys to avoid matching x264, 5.1, etc as episode info
"includes": ["season", "episode", "video_codec", "audio_codec"],
"enforce_list": True,
},
)
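
Because of enforce_list, every guessed property comes back as a list, which is what the `season in guess["season"]` membership tests above rely on. An illustrative call (output abridged and assumed):

    from guessit import guessit

    guessit("show.name.s02e03.720p.x264",
            {"type": "episode",
             "includes": ["season", "episode", "video_codec", "audio_codec"],
             "enforce_list": True})
    # -> {'season': [2], 'episode': [3], 'video_codec': ['H.264']}  (illustrative)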
def _get_subtitle_from_item(item, language, episode_number=None):
release_infos = []
try:
release_infos.append(item.find("td", {"class": "a6"}).text.strip())
except (AttributeError, KeyError):
pass
try:
release_infos.append(
item.find("td", {"class": "a1"}).find_all("span")[-1].text.strip()
)
except (AttributeError, KeyError):
pass
release_info = "".join(r_info for r_info in release_infos if r_info)
try:
path = item.find("td", {"class": "a1"}).find("a")["href"]
except (AttributeError, KeyError):
logger.debug("Couldn't get path: %s", item)
return None
return SubsceneSubtitle(language, _BASE_URL + path, release_info, episode_number)

View File

@ -110,7 +110,7 @@ class SubsSabBzSubtitle(Subtitle):
guess_filename = guessit(self.filename, video.hints)
matches |= guess_matches(video, guess_filename)
if isinstance(video, Movie) and (self.num_cds > 1 or 'cd' in guess_filename):
if isinstance(video, Movie) and ((isinstance(self.num_cds, int) and self.num_cds > 1) or 'cd' in guess_filename):
# reduce score of subtitles for multi-disc movie releases
return set()

View File

@ -108,7 +108,7 @@ class SubsUnacsSubtitle(Subtitle):
guess_filename = guessit(self.filename, video.hints)
matches |= guess_matches(video, guess_filename)
if isinstance(video, Movie) and (self.num_cds > 1 or 'cd' in guess_filename):
if isinstance(video, Movie) and ((isinstance(self.num_cds, int) and self.num_cds > 1) or 'cd' in guess_filename):
# reduce score of subtitles for multi-disc movie releases
return set()

View File

@ -169,7 +169,7 @@ def whisper_get_language_reverse(alpha3):
lan = whisper_get_language(wl, whisper_languages[wl])
if lan.alpha3 == alpha3:
return wl
raise ValueError
return None
def language_from_alpha3(lang):
name = Language(lang).name
@ -317,7 +317,7 @@ class WhisperAIProvider(Provider):
if out == None:
logger.info(f"Whisper cannot process {subtitle.video.original_path} because of missing/bad audio track")
subtitle.content = None
return
return
logger.debug(f'Audio stream length (in WAV format) is {len(out):,} bytes')
@ -326,11 +326,23 @@ class WhisperAIProvider(Provider):
else:
output_language = "eng"
input_language = whisper_get_language_reverse(subtitle.audio_language)
if input_language is None:
if output_language == "eng":
# guess that audio track is mislabelled English and let whisper try to transcribe it
input_language = "en"
subtitle.task = "transcribe"
logger.info(f"Whisper treating unsupported audio track language: '{subtitle.audio_language}' as English")
else:
logger.info(f"Whisper cannot process {subtitle.video.original_path} because of unsupported audio track language: '{subtitle.audio_language}'")
subtitle.content = None
return
logger.info(f'Starting WhisperAI {subtitle.task} to {language_from_alpha3(output_language)} for {subtitle.video.original_path}')
startTime = time.time()
r = self.session.post(f"{self.endpoint}/asr",
params={'task': subtitle.task, 'language': whisper_get_language_reverse(subtitle.audio_language), 'output': 'srt', 'encode': 'false'},
params={'task': subtitle.task, 'language': input_language, 'output': 'srt', 'encode': 'false'},
files={'audio_file': out},
timeout=(self.response, self.timeout))

View File

@ -1,299 +0,0 @@
# -*- coding: utf-8 -*-
# vim: fenc=utf-8 ts=4 et sw=4 sts=4
# This file is part of Subscene-API.
#
# Subscene-API is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Subscene-API is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
Python wrapper for the Subscene subtitle database.

Since Subscene doesn't provide an official API, this script does the job by
parsing the website's pages.
"""
# imports
import re
import enum
import sys
import requests
import time
import logging
is_PY2 = sys.version_info[0] < 3
if is_PY2:
from contextlib2 import suppress
from urllib2 import Request, urlopen
else:
from contextlib import suppress
from urllib.request import Request, urlopen
from dogpile.cache.api import NO_VALUE
from subliminal.cache import region
from bs4 import BeautifulSoup, NavigableString
logger = logging.getLogger(__name__)
# constants
HEADERS = {
}
SITE_DOMAIN = "https://subscene.com"
DEFAULT_USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWeb"\
"Kit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36"
ENDPOINT_RE = re.compile(r'(?uis)<form.+?action="/subtitles/(.+)">.*?<input type="text"')
class NewEndpoint(Exception):
pass
# utils
def soup_for(url, data=None, session=None, user_agent=DEFAULT_USER_AGENT):
url = re.sub(r"\s", "+", url)
if not session:
r = Request(url, data=None, headers=dict(HEADERS, **{"User-Agent": user_agent}))
html = urlopen(r).read().decode("utf-8")
else:
ret = session.post(url, data=data)
ret.raise_for_status()
html = ret.text
return BeautifulSoup(html, "html.parser")
class AttrDict(object):
def __init__(self, *attrs):
self._attrs = attrs
for attr in attrs:
setattr(self, attr, "")
def to_dict(self):
return {k: getattr(self, k) for k in self._attrs}
# models
@enum.unique
class SearchTypes(enum.Enum):
Exact = 1
TvSerie = 2
Popular = 3
Close = 4
SectionsParts = {
SearchTypes.Exact: "Exact",
SearchTypes.TvSerie: "TV-Series",
SearchTypes.Popular: "Popular",
SearchTypes.Close: "Close"
}
class Subtitle(object):
def __init__(self, title, url, language, owner_username, owner_url,
description, hearing_impaired):
self.title = title
self.url = url
self.language = language
self.owner_username = owner_username
self.owner_url = owner_url
self.description = description
self.hearing_impaired = hearing_impaired
self._zipped_url = None
def __str__(self):
return self.title
@classmethod
def from_rows(cls, rows):
subtitles = []
for row in rows:
if row.td.a is not None and row.td.get("class", ["lazy"])[0] != "empty":
subtitles.append(cls.from_row(row))
return subtitles
@classmethod
def from_row(cls, row):
attrs = AttrDict("title", "url", "language", "owner_username",
"owner_url", "description", "hearing_impaired")
with suppress(Exception):
attrs.title = row.find("td", "a1").a.find_all("span")[1].text \
.strip()
with suppress(Exception):
attrs.url = SITE_DOMAIN + row.find("td", "a1").a.get("href")
with suppress(Exception):
attrs.language = row.find("td", "a1").a.find_all("span")[0].text \
.strip()
with suppress(Exception):
attrs.owner_username = row.find("td", "a5").a.text.strip()
with suppress(Exception):
            attrs.owner_url = SITE_DOMAIN + row.find("td", "a5").a \
.get("href").strip()
with suppress(Exception):
attrs.description = row.find("td", "a6").div.text.strip()
with suppress(Exception):
attrs.hearing_impaired = bool(row.find("td", "a41"))
return cls(**attrs.to_dict())
@classmethod
def get_zipped_url(cls, url, session=None):
soup = soup_for(url, session=session)
return SITE_DOMAIN + soup.find("div", "download").a.get("href")
@property
def zipped_url(self):
if self._zipped_url:
return self._zipped_url
self._zipped_url = Subtitle.get_zipped_url(self.url)
return self._zipped_url
class Film(object):
def __init__(self, title, year=None, imdb=None, cover=None,
subtitles=None):
self.title = title
self.year = year
self.imdb = imdb
self.cover = cover
self.subtitles = subtitles
def __str__(self):
return self.title
@classmethod
def from_url(cls, url, session=None):
soup = soup_for(url, session=session)
content = soup.find("div", "subtitles")
header = content.find("div", "box clearfix")
cover = None
try:
cover = header.find("div", "poster").img.get("src")
except AttributeError:
pass
title = header.find("div", "header").h2.text[:-12].strip()
imdb = header.find("div", "header").h2.find("a", "imdb").get("href")
year = header.find("div", "header").ul.li.text
year = int(re.findall(r"[0-9]+", year)[0])
rows = content.find("table").tbody.find_all("tr")
subtitles = Subtitle.from_rows(rows)
return cls(title, year, imdb, cover, subtitles)
# functions
def section_exists(soup, section):
tag_part = SectionsParts[section]
try:
headers = soup.find("div", "search-result").find_all("h2")
except AttributeError:
return False
for header in headers:
if tag_part in header.text:
return True
return False
def get_first_film(soup, section, year=None, session=None):
tag_part = SectionsParts[section]
tag = None
headers = soup.find("div", "search-result").find_all("h2")
for header in headers:
if tag_part in header.text:
tag = header
break
if not tag:
return
    url = SITE_DOMAIN + tag.findNext("ul").find("li").div.a.get("href")
for t in tag.findNext("ul").findAll("li"):
if isinstance(t, NavigableString) or not t.div:
continue
if str(year) in t.div.a.string:
url = SITE_DOMAIN + t.div.a.get("href")
break
return Film.from_url(url, session=session)
def find_endpoint(session, content=None):
endpoint = region.get("subscene_endpoint2")
if endpoint is NO_VALUE:
if not content:
content = session.get(SITE_DOMAIN).text
m = ENDPOINT_RE.search(content)
if m:
endpoint = m.group(1).strip()
logger.debug("Switching main endpoint to %s", endpoint)
region.set("subscene_endpoint2", endpoint)
return endpoint
def search(term, release=True, session=None, year=None, limit_to=SearchTypes.Exact, throttle=0):
# note to subscene: if you actually start to randomize the endpoint, we'll have to query your server even more
if release:
endpoint = "release"
else:
endpoint = find_endpoint(session)
time.sleep(throttle)
if not endpoint:
logger.error("Couldn't find endpoint, exiting")
return
soup = soup_for("%s/subtitles/%s" % (SITE_DOMAIN, endpoint), data={"query": term},
session=session)
if soup:
if "Subtitle search by" in str(soup):
rows = soup.find("table").tbody.find_all("tr")
subtitles = Subtitle.from_rows(rows)
return Film(term, subtitles=subtitles)
for junk, search_type in SearchTypes.__members__.items():
if section_exists(soup, search_type):
return get_first_film(soup, search_type, year=year, session=session)
if limit_to == search_type:
return
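
End to end, a caller uses this wrapper the same way SubsceneProvider.query() does — a sketch assuming a logged-in requests session:

    film = search("Some Show - Fifth Season", session=session, release=False,
                  limit_to=SearchTypes.TvSerie, throttle=8)
    if film and film.subtitles:
        for sub in film.subtitles:
            print(sub.language, sub.url)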

View File

@ -30,10 +30,10 @@
"@fortawesome/free-solid-svg-icons": "^6.5.2",
"@fortawesome/react-fontawesome": "^0.2.0",
"@testing-library/jest-dom": "^6.4.2",
"@testing-library/react": "^14.3.0",
"@testing-library/react": "^15.0.5",
"@testing-library/user-event": "^14.5.2",
"@types/jest": "^29.5.12",
"@types/lodash": "^4.17.0",
"@types/lodash": "^4.17.1",
"@types/node": "^20.12.6",
"@types/react": "^18.2.75",
"@types/react-dom": "^18.2.24",
@ -49,12 +49,11 @@
"husky": "^9.0.11",
"jsdom": "^24.0.0",
"lodash": "^4.17.21",
"moment": "^2.30.1",
"prettier": "^3.2.5",
"prettier-plugin-organize-imports": "^3.2.4",
"pretty-quick": "^4.0.0",
"react-table": "^7.8.0",
"recharts": "^2.12.4",
"recharts": "^2.12.6",
"sass": "^1.74.1",
"typescript": "^5.4.4",
"vite": "^5.2.8",
@ -3579,7 +3578,6 @@
"resolved": "https://registry.npmjs.org/@testing-library/dom/-/dom-10.0.0.tgz",
"integrity": "sha512-PmJPnogldqoVFf+EwbHvbBJ98MmqASV8kLrBYgsDNxQcFMeIS7JFL48sfyXvuMtgmWO/wMhh25odr+8VhDmn4g==",
"dev": true,
"peer": true,
"dependencies": {
"@babel/code-frame": "^7.10.4",
"@babel/runtime": "^7.12.5",
@ -3659,51 +3657,23 @@
"dev": true
},
"node_modules/@testing-library/react": {
"version": "14.3.0",
"resolved": "https://registry.npmjs.org/@testing-library/react/-/react-14.3.0.tgz",
"integrity": "sha512-AYJGvNFMbCa5vt1UtDCa/dcaABrXq8gph6VN+cffIx0UeA0qiGqS+sT60+sb+Gjc8tGXdECWYQgaF0khf8b+Lg==",
"version": "15.0.5",
"resolved": "https://registry.npmjs.org/@testing-library/react/-/react-15.0.5.tgz",
"integrity": "sha512-ttodVWYA2i2w4hRa6krKrmS1vKxAEkwDz34y+CwbcrbZUxFzUYN3a5xZyFKo+K6LBseCRCUkwcjATpaNn/UsIA==",
"dev": true,
"dependencies": {
"@babel/runtime": "^7.12.5",
"@testing-library/dom": "^9.0.0",
"@testing-library/dom": "^10.0.0",
"@types/react-dom": "^18.0.0"
},
"engines": {
"node": ">=14"
"node": ">=18"
},
"peerDependencies": {
"react": "^18.0.0",
"react-dom": "^18.0.0"
}
},
"node_modules/@testing-library/react/node_modules/@testing-library/dom": {
"version": "9.3.4",
"resolved": "https://registry.npmjs.org/@testing-library/dom/-/dom-9.3.4.tgz",
"integrity": "sha512-FlS4ZWlp97iiNWig0Muq8p+3rVDjRiYE+YKGbAqXOu9nwJFFOdL00kFpz42M+4huzYi86vAK1sOOfyOG45muIQ==",
"dev": true,
"dependencies": {
"@babel/code-frame": "^7.10.4",
"@babel/runtime": "^7.12.5",
"@types/aria-query": "^5.0.1",
"aria-query": "5.1.3",
"chalk": "^4.1.0",
"dom-accessibility-api": "^0.5.9",
"lz-string": "^1.5.0",
"pretty-format": "^27.0.2"
},
"engines": {
"node": ">=14"
}
},
"node_modules/@testing-library/react/node_modules/aria-query": {
"version": "5.1.3",
"resolved": "https://registry.npmjs.org/aria-query/-/aria-query-5.1.3.tgz",
"integrity": "sha512-R5iJ5lkuHybztUfuOAznmboyjWq8O6sqNqtK7CLOqdydi54VNbORp49mb14KbWgG1QD3JFO9hJdZ+y4KutfdOQ==",
"dev": true,
"dependencies": {
"deep-equal": "^2.0.5"
}
},
"node_modules/@testing-library/user-event": {
"version": "14.5.2",
"resolved": "https://registry.npmjs.org/@testing-library/user-event/-/user-event-14.5.2.tgz",
@ -3912,9 +3882,9 @@
"dev": true
},
"node_modules/@types/lodash": {
"version": "4.17.0",
"resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.17.0.tgz",
"integrity": "sha512-t7dhREVv6dbNj0q17X12j7yDG4bD/DHYX7o5/DbDxobP0HnGPgpRz2Ej77aL7TZT3DSw13fqUTj8J4mMnqa7WA==",
"version": "4.17.1",
"resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.17.1.tgz",
"integrity": "sha512-X+2qazGS3jxLAIz5JDXDzglAF3KpijdhFxlf/V1+hEsOUc+HnWi81L/uv/EvGuV90WY+7mPGFCUDGfQC3Gj95Q==",
"dev": true
},
"node_modules/@types/node": {
@ -5626,38 +5596,6 @@
"node": ">=6"
}
},
"node_modules/deep-equal": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/deep-equal/-/deep-equal-2.2.3.tgz",
"integrity": "sha512-ZIwpnevOurS8bpT4192sqAowWM76JDKSHYzMLty3BZGSswgq6pBaH3DhCSW5xVAZICZyKdOBPjwww5wfgT/6PA==",
"dev": true,
"dependencies": {
"array-buffer-byte-length": "^1.0.0",
"call-bind": "^1.0.5",
"es-get-iterator": "^1.1.3",
"get-intrinsic": "^1.2.2",
"is-arguments": "^1.1.1",
"is-array-buffer": "^3.0.2",
"is-date-object": "^1.0.5",
"is-regex": "^1.1.4",
"is-shared-array-buffer": "^1.0.2",
"isarray": "^2.0.5",
"object-is": "^1.1.5",
"object-keys": "^1.1.1",
"object.assign": "^4.1.4",
"regexp.prototype.flags": "^1.5.1",
"side-channel": "^1.0.4",
"which-boxed-primitive": "^1.0.2",
"which-collection": "^1.0.1",
"which-typed-array": "^1.1.13"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/deep-is": {
"version": "0.1.4",
"resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz",
@ -5926,26 +5864,6 @@
"node": ">= 0.4"
}
},
"node_modules/es-get-iterator": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/es-get-iterator/-/es-get-iterator-1.1.3.tgz",
"integrity": "sha512-sPZmqHBe6JIiTfN5q2pEi//TwxmAFHwj/XEuYjTuse78i8KxaqMTTzxPoFKuzRpDpTJ+0NAbpfenkmH2rePtuw==",
"dev": true,
"dependencies": {
"call-bind": "^1.0.2",
"get-intrinsic": "^1.1.3",
"has-symbols": "^1.0.3",
"is-arguments": "^1.1.1",
"is-map": "^2.0.2",
"is-set": "^2.0.2",
"is-string": "^1.0.7",
"isarray": "^2.0.5",
"stop-iteration-iterator": "^1.0.0"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/es-iterator-helpers": {
"version": "1.0.18",
"resolved": "https://registry.npmjs.org/es-iterator-helpers/-/es-iterator-helpers-1.0.18.tgz",
@ -7306,22 +7224,6 @@
"loose-envify": "^1.0.0"
}
},
"node_modules/is-arguments": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/is-arguments/-/is-arguments-1.1.1.tgz",
"integrity": "sha512-8Q7EARjzEnKpt/PCD7e1cgUS0a6X8u5tdSiMqXhojOdoV9TsMsiO+9VLC5vAmO8N7/GmXn7yjR8qnA6bVAEzfA==",
"dev": true,
"dependencies": {
"call-bind": "^1.0.2",
"has-tostringtag": "^1.0.0"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/is-array-buffer": {
"version": "3.0.4",
"resolved": "https://registry.npmjs.org/is-array-buffer/-/is-array-buffer-3.0.4.tgz",
@ -8422,15 +8324,6 @@
"ufo": "^1.3.2"
}
},
"node_modules/moment": {
"version": "2.30.1",
"resolved": "https://registry.npmjs.org/moment/-/moment-2.30.1.tgz",
"integrity": "sha512-uEmtNhbDOrWPFS+hdjFCBfy9f2YoyzRpwcl+DqpC6taX21FzsTLQVbMV/W7PzNSX6x/bhC1zA3c2UQ5NzH6how==",
"dev": true,
"engines": {
"node": "*"
}
},
"node_modules/mri": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/mri/-/mri-1.2.0.tgz",
@ -8542,22 +8435,6 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/object-is": {
"version": "1.1.6",
"resolved": "https://registry.npmjs.org/object-is/-/object-is-1.1.6.tgz",
"integrity": "sha512-F8cZ+KfGlSGi09lJT7/Nd6KJZ9ygtvYC0/UYYLI9nmQKLMnydpB9yvbv9K1uSkEu7FU9vYPmVwLg328tX+ot3Q==",
"dev": true,
"dependencies": {
"call-bind": "^1.0.7",
"define-properties": "^1.2.1"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/object-keys": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
@ -9330,9 +9207,9 @@
}
},
"node_modules/recharts": {
"version": "2.12.4",
"resolved": "https://registry.npmjs.org/recharts/-/recharts-2.12.4.tgz",
"integrity": "sha512-dM4skmk4fDKEDjL9MNunxv6zcTxePGVEzRnLDXALRpfJ85JoQ0P0APJ/CoJlmnQI0gPjBlOkjzrwrfQrRST3KA==",
"version": "2.12.6",
"resolved": "https://registry.npmjs.org/recharts/-/recharts-2.12.6.tgz",
"integrity": "sha512-D+7j9WI+D0NHauah3fKHuNNcRK8bOypPW7os1DERinogGBGaHI7i6tQKJ0aUF3JXyBZ63dyfKIW2WTOPJDxJ8w==",
"dev": true,
"dependencies": {
"clsx": "^2.0.0",
@ -9875,18 +9752,6 @@
"integrity": "sha512-JPbdCEQLj1w5GilpiHAx3qJvFndqybBysA3qUOnznweH4QbNYUsW/ea8QzSrnh0vNsezMMw5bcVool8lM0gwzg==",
"dev": true
},
"node_modules/stop-iteration-iterator": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/stop-iteration-iterator/-/stop-iteration-iterator-1.0.0.tgz",
"integrity": "sha512-iCGQj+0l0HOdZ2AEeBADlsRC+vsnDsZsbdSiH1yNSjcfKM7fdpCMfqAL/dwF5BLiw/XhRft/Wax6zQbhq2BcjQ==",
"dev": true,
"dependencies": {
"internal-slot": "^1.0.4"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/string-natural-compare": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/string-natural-compare/-/string-natural-compare-3.0.1.tgz",

View File

@ -34,10 +34,10 @@
"@fortawesome/free-solid-svg-icons": "^6.5.2",
"@fortawesome/react-fontawesome": "^0.2.0",
"@testing-library/jest-dom": "^6.4.2",
"@testing-library/react": "^14.3.0",
"@testing-library/react": "^15.0.5",
"@testing-library/user-event": "^14.5.2",
"@types/jest": "^29.5.12",
"@types/lodash": "^4.17.0",
"@types/lodash": "^4.17.1",
"@types/node": "^20.12.6",
"@types/react": "^18.2.75",
"@types/react-dom": "^18.2.24",
@ -53,12 +53,11 @@
"husky": "^9.0.11",
"jsdom": "^24.0.0",
"lodash": "^4.17.21",
"moment": "^2.30.1",
"prettier": "^3.2.5",
"prettier-plugin-organize-imports": "^3.2.4",
"pretty-quick": "^4.0.0",
"react-table": "^7.8.0",
"recharts": "^2.12.4",
"recharts": "^2.12.6",
"sass": "^1.74.1",
"typescript": "^5.4.4",
"vite": "^5.2.8",
@ -77,7 +76,7 @@
"test:ui": "vitest --ui",
"coverage": "vitest run --coverage",
"format": "prettier -w .",
"prepare": "cd .. && husky install frontend/.husky"
"prepare": "cd .. && husky frontend/.husky"
},
"browserslist": {
"production": [

View File

@ -374,7 +374,6 @@ export const ProviderList: Readonly<ProviderInfo[]> = [
{
key: "subf2m",
name: "subf2m.co",
description: "Subscene Alternative Provider",
inputs: [
{
type: "switch",
@ -406,20 +405,6 @@ export const ProviderList: Readonly<ProviderInfo[]> = [
description:
"Greek Subtitles Provider.\nRequires anti-captcha provider to solve captchas for each download.",
},
{
key: "subscene",
inputs: [
{
type: "text",
key: "username",
},
{
type: "password",
key: "password",
},
],
description: "Broken, may not work for some. Use subf2m instead.",
},
{ key: "subscenter", description: "Hebrew Subtitles Provider" },
{
key: "subsunacs",

View File

@ -20,7 +20,6 @@ import {
Text,
} from "@mantine/core";
import { useDocumentTitle } from "@mantine/hooks";
import moment from "moment";
import {
FunctionComponent,
PropsWithChildren,
@ -28,6 +27,13 @@ import {
useCallback,
useState,
} from "react";
import {
divisorDay,
divisorHour,
divisorMinute,
divisorSecond,
formatTime,
} from "@/utilities/time";
import Table from "./table";
interface InfoProps {
@ -98,15 +104,19 @@ const SystemStatusView: FunctionComponent = () => {
const update = useCallback(() => {
const startTime = status?.start_time;
if (startTime) {
const duration = moment.duration(
moment().utc().unix() - startTime,
"seconds",
),
days = duration.days(),
hours = duration.hours().toString().padStart(2, "0"),
minutes = duration.minutes().toString().padStart(2, "0"),
seconds = duration.seconds().toString().padStart(2, "0");
setUptime(days + "d " + hours + ":" + minutes + ":" + seconds);
// Current time in seconds
const currentTime = Math.floor(Date.now() / 1000);
const uptimeInSeconds = currentTime - startTime;
const uptime: string = formatTime(uptimeInSeconds, [
{ unit: "d", divisor: divisorDay },
{ unit: "h", divisor: divisorHour },
{ unit: "m", divisor: divisorMinute },
{ unit: "s", divisor: divisorSecond },
]);
setUptime(uptime);
}
}, [status?.start_time]);

View File

@ -20,7 +20,6 @@ interface Settings {
xsubs: Settings.XSubs;
assrt: Settings.Assrt;
napisy24: Settings.Napisy24;
subscene: Settings.Subscene;
betaseries: Settings.Betaseries;
titlovi: Settings.Titlovi;
ktuvit: Settings.Ktuvit;
@ -211,8 +210,6 @@ declare namespace Settings {
interface Napisy24 extends BaseProvider {}
interface Subscene extends BaseProvider {}
interface Titlovi extends BaseProvider {}
interface Ktuvit {

View File

@ -0,0 +1,60 @@
import {
divisorDay,
divisorHour,
divisorMinute,
divisorSecond,
formatTime,
} from "./time";
describe("formatTime", () => {
it("should format day hour minute and second", () => {
const uptimeInSeconds = 3661;
const formattedTime = formatTime(uptimeInSeconds, [
{ unit: "d", divisor: divisorDay },
{ unit: "h", divisor: divisorHour },
{ unit: "m", divisor: divisorMinute },
{ unit: "s", divisor: divisorSecond },
]);
expect(formattedTime).toBe("0d 01:01:01");
});
it("should format multiple digits of days", () => {
const uptimeInSeconds = 50203661;
const formattedTime = formatTime(uptimeInSeconds, [
{ unit: "d", divisor: divisorDay },
{ unit: "h", divisor: divisorHour },
{ unit: "m", divisor: divisorMinute },
{ unit: "s", divisor: divisorSecond },
]);
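    // 50203661 s = 581 d (50198400 s) + 1 h + 27 m + 41 s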
expect(formattedTime).toBe("581d 25:27:41");
});
it("should format time day hour minute", () => {
const uptimeInSeconds = 3661;
const formattedTime = formatTime(uptimeInSeconds, [
{ unit: "d", divisor: divisorDay },
{ unit: "h", divisor: divisorHour },
{ unit: "m", divisor: divisorMinute },
]);
expect(formattedTime).toBe("0d 01:01");
});
it("should format zero uptime", () => {
const uptimeInSeconds = 0;
const formattedTime = formatTime(uptimeInSeconds, [
{ unit: "d", divisor: divisorDay },
{ unit: "h", divisor: divisorHour },
{ unit: "m", divisor: divisorMinute },
{ unit: "s", divisor: divisorSecond },
]);
expect(formattedTime).toBe("0d 00:00:00");
});
});

View File

@ -0,0 +1,29 @@
interface TimeFormat {
unit: string;
divisor: number;
}
export const divisorDay = 24 * 60 * 60;
export const divisorHour = 60 * 60;
export const divisorMinute = 60;
export const divisorSecond = 1;
export const formatTime = (
timeInSeconds: number,
formats: TimeFormat[],
): string =>
formats.reduce(
(formattedTime: string, { unit, divisor }: TimeFormat, index: number) => {
      // first unit is open-ended; later units wrap at the ratio to the
      // previous divisor (hours wrap at 24, minutes and seconds at 60)
      const timeValue: number =
        index === 0
          ? Math.floor(timeInSeconds / divisor)
          : Math.floor(timeInSeconds / divisor) %
            (formats[index - 1].divisor / divisor);
return (
formattedTime +
(index === 0
? `${timeValue}${unit} `
: `${timeValue.toString().padStart(2, "0")}${index < formats.length - 1 ? ":" : ""}`)
);
},
"",
);

View File

@ -1,6 +1,7 @@
from flask import current_app
from alembic import context
from sqlalchemy import text
import logging
@ -95,8 +96,22 @@ def run_migrations_online():
)
with context.begin_transaction():
bind = context.get_bind()
if bind.engine.name == 'sqlite':
bind.execute(text("PRAGMA foreign_keys=OFF;"))
elif bind.engine.name == 'postgresql':
bind.execute(text("SET CONSTRAINTS ALL DEFERRED;"))
context.run_migrations()
if bind.engine.name == 'sqlite':
bind.execute(text("PRAGMA foreign_keys=ON;"))
elif bind.engine.name == 'postgresql':
bind.execute(text("SET CONSTRAINTS ALL IMMEDIATE;"))
bind.close()
if context.is_offline_mode():
run_migrations_offline()

View File

@ -0,0 +1,46 @@
"""empty message
Revision ID: 452dd0f0b578
Revises: 30f37e2e15e1
Create Date: 2024-05-06 20:27:15.618027
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '452dd0f0b578'
down_revision = '30f37e2e15e1'
branch_labels = None
depends_on = None
bind = op.get_context().bind
insp = sa.inspect(bind)
def column_exists(table_name, column_name):
columns = insp.get_columns(table_name)
return any(c["name"] == column_name for c in columns)
def upgrade():
if column_exists('table_shows', 'alternativeTitle'):
with op.batch_alter_table('table_shows', schema=None) as batch_op:
batch_op.drop_column('alternativeTitle')
if not column_exists('table_languages_profiles', 'originalFormat'):
with op.batch_alter_table('table_languages_profiles', schema=None) as batch_op:
batch_op.add_column(sa.Column('originalFormat', sa.Integer(), server_default='0'))
if not column_exists('table_languages_profiles', 'mustContain'):
with op.batch_alter_table('table_languages_profiles', schema=None) as batch_op:
batch_op.add_column(sa.Column('mustContain', sa.Text(), server_default='[]'))
if not column_exists('table_languages_profiles', 'mustNotContain'):
with op.batch_alter_table('table_languages_profiles', schema=None) as batch_op:
batch_op.add_column(sa.Column('mustNotContain', sa.Text(), server_default='[]'))
def downgrade():
pass
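
The column_exists() guard makes this migration idempotent: re-running upgrade() skips columns that already exist. Illustratively, with SQLAlchemy's inspector (the same API the helper uses):

    insp = sa.inspect(bind)
    names = [c["name"] for c in insp.get_columns("table_languages_profiles")]
    # after one upgrade(), names includes 'originalFormat', 'mustContain' and
    # 'mustNotContain', so later runs skip the add_column calls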