12 Commits

Author SHA1 Message Date
a5f864b802 Merge pull request 'develop' (#12) from develop into master
Reviewed-on: #12
2023-09-30 19:36:35 +00:00
5e9fe86cd6 Merge pull request 'develop' (#11) from develop into master
Reviewed-on: #11
2023-09-19 01:11:23 +00:00
da70244a54 Merge pull request 'Externalized items to new glade files and classes' (#10) from develop into master
Reviewed-on: #10
2022-12-04 08:49:44 +00:00
c01e81af27 Merge pull request 'Merge Stable Changes to Master' (#9) from develop into master
Reviewed-on: #9
2022-11-29 04:58:09 +00:00
41f39ba8cc Merge pull request 'Plugin rework/fixes' (#8) from develop into master
Reviewed-on: #8
2022-09-06 02:24:35 +00:00
bebe0c7cba Merge pull request 'plugin work' (#7) from develop into master
Reviewed-on: #7
2022-09-05 06:09:39 +00:00
74d53690e2 Merge pull request 'Updated VOD Thumbnailer, added icon gen logic' (#6) from develop into master
Reviewed-on: #6
2022-09-03 22:03:24 +00:00
061dbf19ad Merge pull request 'develop' (#5) from develop into master
Reviewed-on: #5
2022-09-03 05:46:08 +00:00
1798213bfc Merge pull request 'develop' (#4) from develop into master
Reviewed-on: #4
2022-09-02 03:25:44 +00:00
6bd4d97db2 Merge pull request 'Bringing to latest changes' (#3) from develop into master
Reviewed-on: #3
2022-07-16 19:14:29 +00:00
7737e3ad6d Merge pull request 'develop' (#2) from develop into master
Reviewed-on: #2
2022-01-31 02:34:18 +00:00
3c914e64dd Merge pull request 'develop' (#1) from develop into master
Reviewed-on: #1
2022-01-30 00:22:36 +00:00
1450 changed files with 46143 additions and 62941 deletions

View File

@@ -8,7 +8,7 @@ Additionally, if not building a .deb then just move the contents of user_config
 Copy the share/solarfm folder to your user .config/ directory too.
 `pyrightconfig.json`
-<p>The pyrightconfig file needs to stay on the same level as the .git folders in order to have settings detected when using pyright with LSP functionality. "pyrightconfig.json" can prompt IDEs such as Zed on which settings to use and where imports are located; look at venvPath and venv. "venvPath" is the parent path of "venv", where "venv" is just the name of the folder under the parent path that is the Python-created venv.</p>
+<p>The pyrightconfig file needs to stay on the same level as the .git folders in order to have settings detected when using pyright with LSP functionality.</p>
 <h6>Install Setup</h6>
 ```
@@ -17,7 +17,7 @@ sudo apt-get install xclip python3.8 python3-setproctitle python3-gi wget ffmpeg
 # Known Issues
 <ul>
-<li>The memory leak has now been mostly mitigated in that it doesn't balloon to 2GB. Still working out where the smaller one(s) is/are.</li>
+<li>There's a memory leak. Still analyzing where exactly.</li>
 <li>Doing Ctrl+D when in Terminator (maybe other terminals too) somehow propagates the signal to SolarFM too.
 A selected file in the active quad-pane will move to trash since it is the default key-binding for that action.</li>
 </ul>
@@ -32,4 +32,4 @@ A selected file in the active quad-pane will move to trash since it is the defau
 ![1 SolarFM single pane. ](images/pic1.png)
 ![2 SolarFM double pane. ](images/pic2.png)
 ![3 SolarFM triple pane. ](images/pic3.png)
 ![4 SolarFM quad pane. ](images/pic4.png)
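
For readers setting this up, the paragraph above is the whole contract: `venvPath` points at the directory that holds your virtual environments, and `venv` names the one to use. A minimal illustrative `pyrightconfig.json` follows; the paths and values are placeholders, not taken from this repository:

```json
{
    "venvPath": "/home/<user>/.venvs",
    "venv": "solarfm",
    "reportMissingImports": true
}
```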

View File

@@ -1,13 +0,0 @@
# Netscape HTTP Cookie File
# This file is generated by yt-dlp. Do not edit.
.youtube.com TRUE / FALSE 0 PREF hl=en&tz=UTC
.youtube.com TRUE / TRUE 0 SOCS CAI
.youtube.com TRUE / TRUE 1746228332 GPS 1
.youtube.com TRUE / TRUE 0 YSC WmwjDInItf4
.youtube.com TRUE / TRUE 1761778533 __Secure-ROLLOUT_TOKEN CO_R6PStoK3FQRCTgL7h8IWNAxi2uuTh8IWNAw%3D%3D
.youtube.com TRUE / TRUE 1761778533 VISITOR_INFO1_LIVE 9VALBXb6AdM
.youtube.com TRUE / TRUE 1761778533 VISITOR_PRIVACY_METADATA CgJVUxIEGgAgOQ%3D%3D
.youtube.com TRUE / TRUE 1761778533 YT_DEVICE_MEASUREMENT_ID TWV9k30=
.youtube.com TRUE / TRUE 1809298533 __Secure-YT_TVFAS t=485062&s=2
.youtube.com TRUE / TRUE 1761778533 DEVICE_INFO ChxOelE1T1RrNE5UZzFNRGd5TURJM09UVTROdz09EOWa1cAGGOWa1cAG

View File

@@ -1,6 +0,0 @@
#!/bin/bash
function main() {
sudo chown -R "${USER}":"${USER}" .
}
main;

View File

@@ -1 +0,0 @@
Place the zipped up solarfm zip here and run the build script.

View File

@@ -1,968 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Generated with glade 3.40.0 -->
<interface>
<requires lib="gtk+" version="3.22"/>
<object class="GtkImage" id="image1">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-save-as</property>
</object>
<object class="GtkImage" id="image2">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-file</property>
</object>
<object class="GtkImage" id="image3">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-justify-center</property>
</object>
<object class="GtkImage" id="image4">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-save</property>
</object>
<object class="GtkImage" id="image5">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-execute</property>
</object>
<object class="GtkImage" id="image6">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-redo</property>
</object>
<object class="GtkImage" id="image7">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-justify-center</property>
</object>
<object class="GtkImage" id="io_img">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-stop</property>
</object>
<object class="GtkImage" id="tggl_notebook_1_img">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-apply</property>
<property name="icon_size">3</property>
</object>
<object class="GtkImage" id="tggl_notebook_2_img">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-apply</property>
<property name="icon_size">3</property>
</object>
<object class="GtkImage" id="tggl_notebook_3_img">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-apply</property>
<property name="icon_size">3</property>
</object>
<object class="GtkImage" id="tggl_notebook_4_img">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="stock">gtk-apply</property>
<property name="icon_size">3</property>
</object>
<object class="GtkBox" id="core_widget">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="orientation">vertical</property>
<property name="baseline-position">top</property>
<child>
<object class="GtkBox" id="main_menu_bar">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkMenuBar">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkMenuItem">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="label" translatable="yes">_File</property>
<property name="use-underline">True</property>
<child type="submenu">
<object class="GtkMenu">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkImageMenuItem">
<property name="label">gtk-new</property>
<property name="name">create</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="tooltip-text" translatable="yes">New File/Folder...</property>
<property name="use-underline">True</property>
<property name="use-stock">True</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">gtk-open</property>
<property name="name">open</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="tooltip-text" translatable="yes">Open...</property>
<property name="use-underline">True</property>
<property name="use-stock">True</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
<child>
<object class="GtkSeparatorMenuItem">
<property name="visible">True</property>
<property name="can-focus">False</property>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">Reload Plugins</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="image">image6</property>
<property name="use-stock">False</property>
<signal name="button-release-event" handler="reload_plugins" swapped="no"/>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">Terminal</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="image">image5</property>
<property name="use-stock">False</property>
<signal name="button-release-event" handler="open_terminal" swapped="no"/>
</object>
</child>
<child>
<object class="GtkMenuItem">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="label" translatable="yes">Session</property>
<child type="submenu">
<object class="GtkMenu">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkImageMenuItem">
<property name="label">Save Session</property>
<property name="name">save_session</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="tooltip-text" translatable="yes">New File/Folder...</property>
<property name="image">image4</property>
<property name="use-stock">False</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">Save Session As</property>
<property name="name">save_session_as</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="tooltip-text" translatable="yes">New File/Folder...</property>
<property name="image">image1</property>
<property name="use-stock">False</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">Load Session</property>
<property name="name">load_session</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="tooltip-text" translatable="yes">New File/Folder...</property>
<property name="image">image2</property>
<property name="use-stock">False</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
</object>
</child>
</object>
</child>
<child>
<object class="GtkMenuItem">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="label" translatable="yes">Debug</property>
<child type="submenu">
<object class="GtkMenu">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkImageMenuItem">
<property name="label">Show Errors</property>
<property name="name">messages_popup</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="image">image3</property>
<property name="use-stock">False</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">Show Interactive Debugger</property>
<property name="name">ui_debug</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="image">image7</property>
<property name="use-stock">False</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
</object>
</child>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">gtk-quit</property>
<property name="name">tear_down</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="use-underline">True</property>
<property name="use-stock">True</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
</object>
</child>
</object>
</child>
<child>
<object class="GtkMenuItem">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="label" translatable="yes">_Edit</property>
<property name="use-underline">True</property>
<child type="submenu">
<object class="GtkMenu">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkImageMenuItem">
<property name="label">gtk-cut</property>
<property name="name">cut</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="use-underline">True</property>
<property name="use-stock">True</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">gtk-copy</property>
<property name="name">copy</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="use-underline">True</property>
<property name="use-stock">True</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
<child>
<object class="GtkImageMenuItem">
<property name="label">gtk-paste</property>
<property name="name">paste</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="use-underline">True</property>
<property name="use-stock">True</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
</object>
</child>
</object>
</child>
<child>
<object class="GtkMenuItem">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="label" translatable="yes">_Help</property>
<property name="use-underline">True</property>
<child type="submenu">
<object class="GtkMenu">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkImageMenuItem">
<property name="label">gtk-about</property>
<property name="name">about_page</property>
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="use-underline">True</property>
<property name="use-stock">True</property>
<signal name="button-release-event" handler="do_action_from_menu_controls" swapped="no"/>
</object>
</child>
</object>
</child>
</object>
</child>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkButtonBox" id="main_menu_bttn_box_bar">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="spacing">5</property>
<property name="layout-style">start</property>
<child>
<object class="GtkButton" id="plugins_button">
<property name="label" translatable="yes">Plugins</property>
<property name="name">plugins_popup</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<signal name="released" handler="do_action_from_menu_controls" swapped="no"/>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkToggleButton" id="tggl_notebook_1">
<property name="name">tggl_notebook_1</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="image">tggl_notebook_1_img</property>
<property name="always-show-image">True</property>
<signal name="released" handler="toggle_notebook_pane" swapped="no"/>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
<child>
<object class="GtkToggleButton" id="tggl_notebook_2">
<property name="name">tggl_notebook_2</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="image">tggl_notebook_2_img</property>
<property name="always-show-image">True</property>
<signal name="released" handler="toggle_notebook_pane" swapped="no"/>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">2</property>
</packing>
</child>
<child>
<object class="GtkToggleButton" id="tggl_notebook_3">
<property name="name">tggl_notebook_3</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="image">tggl_notebook_3_img</property>
<property name="always-show-image">True</property>
<signal name="released" handler="toggle_notebook_pane" swapped="no"/>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">3</property>
</packing>
</child>
<child>
<object class="GtkToggleButton" id="tggl_notebook_4">
<property name="name">tggl_notebook_4</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="image">tggl_notebook_4_img</property>
<property name="always-show-image">True</property>
<signal name="released" handler="toggle_notebook_pane" swapped="no"/>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">4</property>
</packing>
</child>
<child>
<object class="GtkButton" id="io_button">
<property name="label" translatable="yes">I/O</property>
<property name="name">io_popup</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="image">io_img</property>
<property name="always-show-image">True</property>
<signal name="released" handler="do_action_from_menu_controls" swapped="no"/>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">5</property>
</packing>
</child>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
<child>
<object class="GtkButton">
<property name="label" translatable="yes"> </property>
<property name="visible">True</property>
<property name="sensitive">False</property>
<property name="can-focus">False</property>
<property name="focus-on-click">False</property>
<property name="receives-default">False</property>
<property name="double-buffered">False</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">2</property>
</packing>
</child>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkBox" id="path_menu_bar">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkButton" id="go_home">
<property name="label">gtk-home</property>
<property name="name">go_home</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
<signal name="released" handler="do_action_from_bar_controls" swapped="no"/>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkButton" id="create_tab">
<property name="label">gtk-add</property>
<property name="name">create_tab</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
<signal name="released" handler="do_action_from_bar_controls" swapped="no"/>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
<child>
<object class="GtkButton" id="go_up">
<property name="label">gtk-go-up</property>
<property name="name">go_up</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
<signal name="released" handler="do_action_from_bar_controls" swapped="no"/>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">2</property>
</packing>
</child>
<child>
<object class="GtkEntry" id="path_entry">
<property name="name">path_entry</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="hexpand">True</property>
<property name="placeholder-text" translatable="yes">Path...</property>
<signal name="changed" handler="do_action_from_bar_controls" swapped="no"/>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">3</property>
</packing>
</child>
<child>
<object class="GtkButton" id="refresh_tab">
<property name="label">gtk-refresh</property>
<property name="name">refresh_view</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
<signal name="released" handler="do_action_from_bar_controls" swapped="no"/>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">4</property>
</packing>
</child>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
<child>
<object class="GtkPaned" id="pane_master">
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="hexpand">True</property>
<property name="vexpand">True</property>
<property name="orientation">vertical</property>
<property name="wide-handle">True</property>
<child>
<object class="GtkPaned" id="pane_top">
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="margin-bottom">5</property>
<property name="hexpand">True</property>
<property name="vexpand">True</property>
<property name="wide-handle">True</property>
<child>
<object class="GtkNotebook" id="window_1">
<property name="name">notebook1</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="can-default">True</property>
<property name="margin-left">5</property>
<property name="margin-right">5</property>
<property name="margin-top">5</property>
<property name="margin-bottom">5</property>
<property name="show-border">False</property>
<property name="scrollable">True</property>
<property name="group-name">sfm_windows</property>
<signal name="create-window" handler="on_tab_dnded" swapped="no"/>
<signal name="page-reordered" handler="on_tab_reorder" swapped="no"/>
<signal name="switch-page" handler="on_tab_switch_update" swapped="no"/>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child type="action-start">
<object class="GtkButtonBox">
<property name="can-focus">False</property>
<property name="layout-style">start</property>
<child>
<object class="GtkButton">
<property name="label">gtk-go-back</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkButton">
<property name="label">gtk-go-forward</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
</object>
<packing>
<property name="tab-fill">False</property>
</packing>
</child>
<child type="action-end">
<object class="GtkSearchEntry" id="win1_search_field">
<property name="name">window_1</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="primary-icon-name">edit-find-symbolic</property>
<property name="primary-icon-activatable">False</property>
<property name="primary-icon-sensitive">False</property>
<property name="placeholder-text" translatable="yes">Search...</property>
<signal name="changed" handler="do_file_search" swapped="no"/>
</object>
<packing>
<property name="tab-fill">False</property>
</packing>
</child>
</object>
<packing>
<property name="resize">False</property>
<property name="shrink">False</property>
</packing>
</child>
<child>
<object class="GtkNotebook" id="window_2">
<property name="name">notebook2</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="can-default">True</property>
<property name="margin-left">5</property>
<property name="margin-right">5</property>
<property name="margin-top">5</property>
<property name="margin-bottom">5</property>
<property name="show-border">False</property>
<property name="scrollable">True</property>
<property name="group-name">sfm_windows</property>
<signal name="create-window" handler="on_tab_dnded" swapped="no"/>
<signal name="page-reordered" handler="on_tab_reorder" swapped="no"/>
<signal name="switch-page" handler="on_tab_switch_update" swapped="no"/>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child type="action-start">
<object class="GtkButtonBox">
<property name="can-focus">False</property>
<property name="layout-style">start</property>
<child>
<object class="GtkButton">
<property name="label">gtk-go-back</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkButton">
<property name="label">gtk-go-forward</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
</object>
<packing>
<property name="tab-fill">False</property>
</packing>
</child>
<child type="action-end">
<object class="GtkSearchEntry" id="win2_search_field">
<property name="name">window_2</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="primary-icon-name">edit-find-symbolic</property>
<property name="primary-icon-activatable">False</property>
<property name="primary-icon-sensitive">False</property>
<property name="placeholder-text" translatable="yes">Search...</property>
<signal name="changed" handler="do_file_search" swapped="no"/>
</object>
<packing>
<property name="tab-fill">False</property>
</packing>
</child>
</object>
<packing>
<property name="resize">False</property>
<property name="shrink">False</property>
</packing>
</child>
</object>
<packing>
<property name="resize">True</property>
<property name="shrink">True</property>
</packing>
</child>
<child>
<object class="GtkPaned" id="pane_bottom">
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="margin-top">5</property>
<property name="hexpand">True</property>
<property name="vexpand">True</property>
<property name="wide-handle">True</property>
<child>
<object class="GtkNotebook" id="window_3">
<property name="name">notebook3</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="can-default">True</property>
<property name="margin-left">5</property>
<property name="margin-right">5</property>
<property name="margin-top">5</property>
<property name="margin-bottom">5</property>
<property name="show-border">False</property>
<property name="scrollable">True</property>
<property name="group-name">sfm_windows</property>
<signal name="create-window" handler="on_tab_dnded" swapped="no"/>
<signal name="page-reordered" handler="on_tab_reorder" swapped="no"/>
<signal name="switch-page" handler="on_tab_switch_update" swapped="no"/>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child type="action-start">
<object class="GtkButtonBox">
<property name="can-focus">False</property>
<property name="layout-style">start</property>
<child>
<object class="GtkButton">
<property name="label">gtk-go-back</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkButton">
<property name="label">gtk-go-forward</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
</object>
<packing>
<property name="tab-fill">False</property>
</packing>
</child>
<child type="action-end">
<object class="GtkSearchEntry" id="win3_search_field">
<property name="name">window_3</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="primary-icon-name">edit-find-symbolic</property>
<property name="primary-icon-activatable">False</property>
<property name="primary-icon-sensitive">False</property>
<property name="placeholder-text" translatable="yes">Search...</property>
<signal name="changed" handler="do_file_search" swapped="no"/>
</object>
<packing>
<property name="tab-fill">False</property>
</packing>
</child>
</object>
<packing>
<property name="resize">False</property>
<property name="shrink">False</property>
</packing>
</child>
<child>
<object class="GtkNotebook" id="window_4">
<property name="name">notebook4</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="can-default">True</property>
<property name="margin-right">5</property>
<property name="margin-top">5</property>
<property name="margin-bottom">5</property>
<property name="show-border">False</property>
<property name="scrollable">True</property>
<property name="group-name">sfm_windows</property>
<signal name="create-window" handler="on_tab_dnded" swapped="no"/>
<signal name="page-reordered" handler="on_tab_reorder" swapped="no"/>
<signal name="switch-page" handler="on_tab_switch_update" swapped="no"/>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child>
<placeholder/>
</child>
<child type="tab">
<placeholder/>
</child>
<child type="action-start">
<object class="GtkButtonBox">
<property name="can-focus">False</property>
<property name="layout-style">start</property>
<child>
<object class="GtkButton">
<property name="label">gtk-go-back</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkButton">
<property name="label">gtk-go-forward</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<property name="use-stock">True</property>
<property name="always-show-image">True</property>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
</object>
<packing>
<property name="tab-fill">False</property>
</packing>
</child>
<child type="action-end">
<object class="GtkSearchEntry" id="win4_search_field">
<property name="name">window_4</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="primary-icon-name">edit-find-symbolic</property>
<property name="primary-icon-activatable">False</property>
<property name="primary-icon-sensitive">False</property>
<property name="placeholder-text" translatable="yes">Search...</property>
<signal name="changed" handler="do_file_search" swapped="no"/>
</object>
<packing>
<property name="tab-fill">False</property>
</packing>
</child>
</object>
<packing>
<property name="resize">False</property>
<property name="shrink">False</property>
</packing>
</child>
</object>
<packing>
<property name="resize">True</property>
<property name="shrink">True</property>
</packing>
</child>
</object>
<packing>
<property name="expand">True</property>
<property name="fill">True</property>
<property name="position">2</property>
</packing>
</child>
<child>
<placeholder/>
</child>
</object>
</interface>

View File

@@ -1,50 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Generated with glade 3.40.0 -->
<interface>
<requires lib="gtk+" version="3.22"/>
<object class="GtkStatusbar" id="bottom_status_info">
<property name="visible">True</property>
<property name="can-focus">False</property>
<property name="margin-left">10</property>
<property name="margin-right">10</property>
<property name="margin-start">10</property>
<property name="margin-end">10</property>
<property name="margin-top">6</property>
<property name="margin-bottom">6</property>
<property name="spacing">15</property>
<property name="baseline-position">top</property>
<child>
<object class="GtkLabel" id="bottom_size_label">
<property name="visible">True</property>
<property name="can-focus">False</property>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">0</property>
</packing>
</child>
<child>
<object class="GtkLabel" id="bottom_file_count_label">
<property name="visible">True</property>
<property name="can-focus">False</property>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">1</property>
</packing>
</child>
<child>
<object class="GtkLabel" id="bottom_path_label">
<property name="visible">True</property>
<property name="can-focus">False</property>
</object>
<packing>
<property name="expand">False</property>
<property name="fill">True</property>
<property name="position">2</property>
</packing>
</child>
</object>
</interface>

Binary file not shown. (Before: 1.0 MiB, After: 504 KiB)

Binary file not shown. (Before: 1.1 MiB, After: 316 KiB)

Binary file not shown. (Before: 1.2 MiB, After: 307 KiB)

Binary file not shown. (Before: 1.3 MiB, After: 464 KiB)

View File

@@ -14,7 +14,6 @@ class Manifest:
         'ui_target': "plugin_control_list",
         'pass_fm_events': "true"
     }
-    pre_launch: bool = False
 ```

View File

@@ -122,6 +122,7 @@ class Plugin(PluginBase):
         uri = state.uris[0]
         path = state.tab.get_current_directory()
         properties = self._set_ui_data(uri, path)
+
         response = self._properties_dialog.run()
         if response in [Gtk.ResponseType.CANCEL, Gtk.ResponseType.DELETE_EVENT]:
@@ -167,13 +168,13 @@ class Plugin(PluginBase):
     def _set_ui_data(self, uri, path):
         properties = Properties()
-        file_info = Gio.File.new_for_path(uri).query_info(attributes = "standard::*,owner::*,time::access,time::changed",
-                                                          flags = Gio.FileQueryInfoFlags.NONE,
-                                                          cancellable = None)
+        file_info = Gio.File.new_for_path(uri).query_info(attributes="standard::*,owner::*,time::access,time::changed",
+                                                          flags=Gio.FileQueryInfoFlags.NONE,
+                                                          cancellable=None)

         is_symlink = file_info.get_attribute_as_string("standard::is-symlink")
         properties.file_uri = uri
-        properties.file_target = file_info.get_attribute_as_string("standard::symlink-target") if is_symlink in [True, "TRUE"] else ""
+        properties.file_target = file_info.get_attribute_as_string("standard::symlink-target") if is_symlink else ""
         properties.file_name = file_info.get_display_name()
         properties.file_location = path
         properties.mime_type = file_info.get_content_type()
@@ -185,7 +186,7 @@ class Plugin(PluginBase):
         # NOTE: Read = 4, Write = 2, Exec = 1
         command = ["stat", "-c", "%a", uri]
-        with subprocess.Popen(command, stdout = subprocess.PIPE) as proc:
+        with subprocess.Popen(command, stdout=subprocess.PIPE) as proc:
             properties.chmod_stat = list(proc.stdout.read().decode("UTF-8").strip())
             owner = self._chmod_map[f"{properties.chmod_stat[0]}"]
             group = self._chmod_map[f"{properties.chmod_stat[1]}"]
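
Aside: the `stat -c %a` output handled above is three octal digits, one each for owner, group, and other, decoded with the Read=4, Write=2, Exec=1 weights from the NOTE. A standalone sketch of that decoding, illustrative only and not code from the plugin:

```python
# Decode one octal permission digit into rwx form (Read=4, Write=2, Exec=1).
def decode_perm_digit(digit: str) -> str:
    n = int(digit, 8)
    return ("r" if n & 4 else "-") + ("w" if n & 2 else "-") + ("x" if n & 1 else "-")

# "754" -> "rwxr-xr--" (owner rwx, group r-x, other r--)
print("".join(decode_perm_digit(d) for d in "754"))
```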

View File

@@ -43,50 +43,6 @@ class FileSearchMixin:
         self.reset_file_list_box()
         self.run_fsearch_watcher(query=widget)

-    # Need to implement this over the threaded stuffs....
-    def cancel_timer(self):
-        if self.timer:
-            self.timer.cancel()
-            GLib.idle_remove_by_data(None)
-
-    def delay_search_Glib(self):
-        GLib.idle_add(self._do_highlight)
-
-    def delay_search(self):
-        wait_time = self.search_time / len(self.search_text)
-        wait_time = max(wait_time, 0.05)
-
-        self.timer = threading.Timer(wait_time, self.delay_search_Glib)
-        self.timer.daemon = True
-        self.timer.start()
-
     @daemon_threaded
     def run_fsearch_watcher(self, query):
         while True:
@@ -148,4 +104,4 @@ class FileSearchMixin:
             file = jdata[1]
             widget = FilePreviewWidget(target, file)
             self._file_list.add(widget)
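
The helpers in the hunk above debounce the search: the timer is rebuilt on every keystroke and hands control back to the GTK main loop via `GLib.idle_add` once typing pauses. The same idea can be kept entirely on the main loop without a `threading.Timer`; a minimal sketch assuming a GTK 3 / PyGObject environment (the class and its names are hypothetical, not part of the plugin):

```python
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import GLib


class SearchDebouncer:
    """Invoke `callback(query)` once input has been idle for `delay_ms`."""

    def __init__(self, callback, delay_ms = 350):
        self._callback = callback
        self._delay_ms = delay_ms
        self._source_id = None

    def poke(self, query):
        # Each keystroke cancels the pending timeout and schedules a fresh one.
        if self._source_id:
            GLib.source_remove(self._source_id)
        self._source_id = GLib.timeout_add(self._delay_ms, self._fire, query)

    def _fire(self, query):
        self._source_id = None
        self._callback(query)
        return GLib.SOURCE_REMOVE  # one-shot: do not reschedule
```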

View File

@@ -48,7 +48,7 @@ class GrepPreviewWidget(Gtk.Box):
         return bytes(f"\n<span foreground='{color}'>{target}</span>", "utf-8").decode("utf-8")

     def make_utf8_line_highlight(self, buffer, itr, i, color, target, query):
-        parts = re.split(r"(?i)(" + query + ")", target.replace("\n", ""))
+        parts = re.split(r"(" + query + ")(?i)", target.replace("\n", ""))
         for part in parts:
             itr = buffer.get_end_iter()
@@ -57,4 +57,4 @@ class GrepPreviewWidget(Gtk.Box):
             else:
                 new_s = f"<span foreground='#000000' background='{color}'>{part}</span>"
                 _part = bytes(new_s, "utf-8").decode("utf-8")
                 buffer.insert_markup(itr, _part, len(_part))
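
One note on the two patterns in this hunk: Python only accepts global inline flags such as `(?i)` at the start of a pattern, and since Python 3.11 a trailing `(?i)` raises `re.error` ("global flags not at the start of the expression"). A version-proof spelling passes the flag argument instead; `re.escape` here is my addition to keep metacharacters in the query literal:

```python
import re

query = "solar"
line = "SolarFM quad pane"

# Accepted on all Python 3 versions: the global flag leads the pattern.
print(re.split(r"(?i)(" + re.escape(query) + ")", line))

# Clearer still: use the flags argument rather than an inline flag.
print(re.split("(" + re.escape(query) + ")", line, flags=re.IGNORECASE))
```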

View File

@@ -8,7 +8,6 @@
             "ui_target": "plugin_control_list",
-            "pass_fm_events": "true"
-        },
-        "pre_launch": "false"
+            "pass_fm_events": "true",
+            "bind_keys": ["Example Plugin||send_message:<Control>f"]
+        }
     }
 }

View File

@@ -1,3 +0,0 @@
"""
Plugin Module
"""

View File

@@ -1,3 +0,0 @@
"""
Plugin Package
"""

View File

@@ -1,73 +0,0 @@
# Python imports
import json
import os
from os import path

# Lib imports
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

# Application imports
from .icon import Icon


class IconController(Icon):
    def __init__(self):
        CURRENT_PATH = os.path.dirname(os.path.realpath(__file__))

        # NOTE: app_name should be defined using python 'builtins' and so too must be logger used in the various classes
        app_name_exists = False
        try:
            app_name
            app_name_exists = True
        except Exception as e:
            ...

        APP_CONTEXT = f"{app_name.lower()}" if app_name_exists else "shellfm"
        USR_APP_CONTEXT = f"/usr/share/{APP_CONTEXT}"
        USER_HOME = path.expanduser('~')
        CONFIG_PATH = f"{USER_HOME}/.config/{APP_CONTEXT}"

        self.DEFAULT_ICONS = f"{CONFIG_PATH}/icons"
        self.DEFAULT_ICON = f"{self.DEFAULT_ICONS}/text.png"
        self.FFMPG_THUMBNLR = f"{CONFIG_PATH}/ffmpegthumbnailer"      # Thumbnail generator binary
        self.BLENDER_THUMBNLR = f"{CONFIG_PATH}/blender-thumbnailer"  # Blender thumbnail generator binary
        self.ICON_DIRS = ["/usr/share/icons", f"{USER_HOME}/.icons", "/usr/share/pixmaps"]
        self.BASE_THUMBS_PTH = f"{USER_HOME}/.thumbnails"
        self.ABS_THUMBS_PTH = f"{self.BASE_THUMBS_PTH}/normal"
        self.STEAM_ICONS_PTH = f"{self.BASE_THUMBS_PTH}/steam_icons"

        if not path.isdir(self.BASE_THUMBS_PTH):
            os.mkdir(self.BASE_THUMBS_PTH)
        if not path.isdir(self.ABS_THUMBS_PTH):
            os.mkdir(self.ABS_THUMBS_PTH)
        if not path.isdir(self.STEAM_ICONS_PTH):
            os.mkdir(self.STEAM_ICONS_PTH)

        if not os.path.exists(self.DEFAULT_ICONS):
            self.DEFAULT_ICONS = f"{USR_APP_CONTEXT}/icons"
            self.DEFAULT_ICON = f"{self.DEFAULT_ICONS}/text.png"

        CONFIG_FILE = f"{CURRENT_PATH}/../settings.json"
        with open(CONFIG_FILE) as f:
            settings = json.load(f)
            config = settings["config"]

        self.container_icon_wh = config["container_icon_wh"]
        self.video_icon_wh = config["video_icon_wh"]
        self.sys_icon_wh = config["sys_icon_wh"]

        # Filters
        filters = settings["filters"]
        self.fmeshs = tuple(filters["meshs"])
        self.fcode = tuple(filters["code"])
        self.fvideos = tuple(filters["videos"])
        self.foffice = tuple(filters["office"])
        self.fimages = tuple(filters["images"])
        self.ftext = tuple(filters["text"])
        self.fmusic = tuple(filters["music"])
        self.fpdf = tuple(filters["pdf"])
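
The NOTE in `__init__` above relies on `app_name` being injected into Python's `builtins` by the application entry point, which is why the class probes for it with a bare name reference. A minimal sketch of that injection; the value and surrounding code are illustrative, not the actual launcher:

```python
import builtins

# Done once at application startup; every module imported afterwards can
# reference `app_name` (and `logger`, per the NOTE) as if it were a builtin.
builtins.app_name = "SolarFM"

# Any later module-level code can now probe it exactly as the controller does:
APP_CONTEXT = app_name.lower()  # -> "solarfm"
```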

View File

@@ -1,12 +0,0 @@
{
"manifest": {
"name": "Thumbnailer",
"author": "ITDominator",
"version": "0.0.1",
"support": "",
"requests": {
"pass_fm_events": "true"
},
"pre_launch": "true"
}
}

View File

@@ -1,59 +0,0 @@
# Python imports
import os

# Lib imports

# Application imports
from plugins.plugin_base import PluginBase
from .icons.controller import IconController


class Plugin(PluginBase):
    def __init__(self):
        super().__init__()

        self.name = "Thumbnailer"  # NOTE: Need to remove after establishing private bidirectional 1-1 message bus
                                   #       where self.name should not be needed for message comms
        # self.path = os.path.dirname(os.path.realpath(__file__))

    def run(self):
        self.icon_controller = IconController()
        self._event_system.subscribe("create-thumbnail", self.create_thumbnail)

    def generate_reference_ui_element(self):
        ...

    def create_thumbnail(self, dir, file) -> str:
        return self.icon_controller.create_icon(dir, file)

    def get_video_icons(self, dir) -> list:
        data = []
        fvideos = self.icon_controller.fvideos
        vids = [file for file in os.listdir(dir) if file.lower().endswith(fvideos)]

        for file in vids:
            img_hash, hash_img_path = self.create_video_thumbnail(full_path = f"{dir}/{file}", returnHashInstead = True)
            data.append([img_hash, hash_img_path])

        return data

    def get_pixbuf_icon_str_combo(self, dir) -> list:
        data = []

        for file in os.listdir(dir):
            icon = self.icon_controller.create_icon(dir, file).get_pixbuf()
            data.append([icon, file])

        return data

    def get_gtk_icon_str_combo(self, dir) -> list:
        data = []

        for file in os.listdir(dir):
            icon = self.icon_controller.create_icon(dir, file)
            data.append([icon, file[0]])

        return data

View File

@@ -1,101 +0,0 @@
{
"config":{
"thumbnailer_path":"ffmpegthumbnailer",
"blender_thumbnailer_path":"",
"container_icon_wh":[
128,
128
],
"video_icon_wh":[
128,
64
],
"sys_icon_wh":[
56,
56
],
"steam_cdn_url":"https://steamcdn-a.akamaihd.net/steam/apps/",
"remux_folder_max_disk_usage":"8589934592"
},
"filters":{
"meshs":[
".dae",
".fbx",
".gltf",
".obj",
".stl"
],
"code":[
".cpp",
".css",
".c",
".go",
".html",
".htm",
".java",
".js",
".json",
".lua",
".md",
".py",
".rs",
".toml",
".xml",
".pom"
],
"videos":[
".mkv",
".mp4",
".webm",
".avi",
".mov",
".m4v",
".mpg",
".mpeg",
".wmv",
".flv"
],
"office":[
".doc",
".docx",
".xls",
".xlsx",
".xlt",
".xltx",
".xlm",
".ppt",
".pptx",
".pps",
".ppsx",
".odt",
".rtf"
],
"images":[
".png",
".jpg",
".jpeg",
".gif",
".ico",
".tga",
".webp"
],
"text":[
".txt",
".text",
".sh",
".cfg",
".conf",
".log"
],
"music":[
".psf",
".mp3",
".ogg",
".flac",
".m4a"
],
"pdf":[
".pdf"
]
}
}

View File

@@ -184,11 +184,11 @@ class Plugin(PluginBase):
         response = requests.post(self.vqd_link, headers=self.vqd_headers, data=self.vqd_data, timeout=2)
         if response.status_code == 200:
             data = response.content
-            vqd_start_index = data.index(b"vqd=\"") + 5
-            vqd_end_index = data.index(b"\"", vqd_start_index)
+            vqd_start_index = data.index(b"vqd='") + 5
+            vqd_end_index = data.index(b"'", vqd_start_index)
             self._vqd_attrib = data[vqd_start_index:vqd_end_index].decode("utf-8")

             print(f"Translation VQD: {self._vqd_attrib}")
         else:
             msg = f"Could not get VQS attribute... Response Code: {response.status_code}"
             self._translate_to_buffer.set_text(msg)
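
Both quoting variants in this hunk hard-code the delimiter; if the endpoint ever flips quoting again, a quote-agnostic extraction avoids the next one-character patch. A hedged sketch, with the function name mine rather than the plugin's:

```python
import re

def extract_vqd(body: bytes) -> str:
    # Accept vqd='...' or vqd="..." by capturing the quote and backreferencing it.
    match = re.search(rb"vqd=(['\"])(.+?)\1", body)
    if match is None:
        raise ValueError("vqd attribute not found in response body")
    return match.group(2).decode("utf-8")
```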

View File

@@ -111,8 +111,6 @@ class Plugin(PluginBase):
         for uri in state.uris:
             self.trashman.trash(uri, verbocity)

-        self.trashman.regenerate()
-
     def restore_trash_files(self, widget = None, eve = None, verbocity = False):
         self._event_system.emit("get_current_state")
         state = self._fm_state

View File

@@ -43,4 +43,4 @@ class Trash(object):
     def restore(self, filename, verbose):
         """Restore a file from trash."""
-        raise NotImplementedError(_('Backend didnt implement this functionality'))
+        raise NotImplementedError(_('Backend didnt \ implement this functionality'))

View File

@@ -127,7 +127,7 @@ DeletionDate={}
             f.write(infofile)
             f.close()

-        # self.regenerate()
+        self.regenerate()

         if verbose:
             sys.stderr.write(_('trashed \'{}\'\n').format(filename))

View File

@@ -8,29 +8,12 @@
 function main() {
-    _STARGET="${1}"
-    _SPATH="${HOME}/.config/solarfm/plugins/youtube_download"
-
-    LINK=`xclip -selection clipboard -o`
-
-    cd "${_SPATH}"
-    echo "Working Dir: " $(pwd)
-
-    rm "${_SPATH}/../../cookies.txt"
-
-    # Note: Export cookies to file
-    python "${_SPATH}/yt_dlp/__main__.py" \
-    --cookies-from-browser firefox --cookies "${_SPATH}/../../cookies.txt"
-
-    # Note: Use cookies from browser directly
-    # python "${_SPATH}/yt_dlp/__main__.py" \
-    # --cookies-from-browser firefox --write-sub --embed-sub --sub-langs en \
-    # -o "${_STARGET}/%(title)s.%(ext)s" "${LINK}"
-
-    # Note: Download video
-    python "${_SPATH}/yt_dlp/__main__.py" \
-    -f "bestvideo[height<=1080][ext=mp4][vcodec^=avc]+bestaudio[ext=m4a]/best[ext=mp4]/best" \
-    --cookies "${_SPATH}/../../cookies.txt" --write-sub --embed-sub --sub-langs en \
-    -o "${_STARGET}/%(title)s.%(ext)s" "${LINK}"
+    cd "$(dirname "")"
+    echo "Working Dir: " $(pwd)
+
+    LINK=`xclip -selection clipboard -o`
+    python "${HOME}/.config/solarfm/plugins/youtube_download/yt_dlp/__main__.py" \
+    --write-sub --embed-sub --sub-langs en \
+    -o "${1}/%(title)s.%(ext)s" "${LINK}"
 }
 main "$@";

File diff suppressed because it is too large.

View File

@@ -1,10 +1,10 @@
import sys try:
import contextvars # noqa: F401
except Exception:
raise Exception(
f'You are using an unsupported version of Python. Only Python versions 3.7 and above are supported by yt-dlp') # noqa: F541
if sys.version_info < (3, 9): __license__ = 'Public Domain'
raise ImportError(
f'You are using an unsupported version of Python. Only Python versions 3.9 and above are supported by yt-dlp') # noqa: F541
__license__ = 'The Unlicense'
import collections import collections
import getpass import getpass
@@ -12,16 +12,15 @@ import itertools
import optparse import optparse
import os import os
import re import re
import sys
import traceback import traceback
from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS, CookieLoadError from .compat import compat_shlex_quote
from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS
from .downloader.external import get_external_downloader from .downloader.external import get_external_downloader
from .extractor import list_extractor_classes from .extractor import list_extractor_classes
from .extractor.adobepass import MSO_INFO from .extractor.adobepass import MSO_INFO
from .networking.impersonate import ImpersonateTarget
from .globals import IN_CLI, plugin_dirs
from .options import parseOpts from .options import parseOpts
from .plugins import load_all_plugins as _load_all_plugins
from .postprocessor import ( from .postprocessor import (
FFmpegExtractAudioPP, FFmpegExtractAudioPP,
FFmpegMergerPP, FFmpegMergerPP,
@@ -44,12 +43,12 @@ from .utils import (
GeoUtils, GeoUtils,
PlaylistEntries, PlaylistEntries,
SameFileError, SameFileError,
decodeOption,
download_range_func, download_range_func,
expand_path, expand_path,
float_or_none, float_or_none,
format_field, format_field,
int_or_none, int_or_none,
join_nonempty,
match_filter_func, match_filter_func,
parse_bytes, parse_bytes,
parse_duration, parse_duration,
@@ -58,15 +57,15 @@ from .utils import (
read_stdin, read_stdin,
render_table, render_table,
setproctitle, setproctitle,
shell_quote,
traverse_obj, traverse_obj,
variadic, variadic,
write_string, write_string,
) )
from .utils.networking import std_headers from .utils.networking import std_headers
from .utils._utils import _UnsafeExtensionError
from .YoutubeDL import YoutubeDL from .YoutubeDL import YoutubeDL
_IN_CLI = False
def _exit(status=0, *args): def _exit(status=0, *args):
for msg in args: for msg in args:
@@ -75,16 +74,14 @@ def _exit(status=0, *args):
def get_urls(urls, batchfile, verbose): def get_urls(urls, batchfile, verbose):
""" # Batch file verification
@param verbose -1: quiet, 0: normal, 1: verbose
"""
batch_urls = [] batch_urls = []
if batchfile is not None: if batchfile is not None:
try: try:
batch_urls = read_batch_urls( batch_urls = read_batch_urls(
read_stdin(None if verbose == -1 else 'URLs') if batchfile == '-' read_stdin('URLs') if batchfile == '-'
else open(expand_path(batchfile), encoding='utf-8', errors='ignore')) else open(expand_path(batchfile), encoding='utf-8', errors='ignore'))
if verbose == 1: if verbose:
write_string('[debug] Batch file urls: ' + repr(batch_urls) + '\n') write_string('[debug] Batch file urls: ' + repr(batch_urls) + '\n')
except OSError: except OSError:
_exit(f'ERROR: batch file {batchfile} could not be read') _exit(f'ERROR: batch file {batchfile} could not be read')
@@ -115,9 +112,9 @@ def print_extractor_information(opts, urls):
ie.description(markdown=False, search_examples=_SEARCHES) ie.description(markdown=False, search_examples=_SEARCHES)
for ie in list_extractor_classes(opts.age_limit) if ie.working() and ie.IE_DESC is not False) for ie in list_extractor_classes(opts.age_limit) if ie.working() and ie.IE_DESC is not False)
elif opts.ap_list_mso: elif opts.ap_list_mso:
out = 'Supported TV Providers:\n{}\n'.format(render_table( out = 'Supported TV Providers:\n%s\n' % render_table(
['mso', 'mso name'], ['mso', 'mso name'],
[[mso_id, mso_info['name']] for mso_id, mso_info in MSO_INFO.items()])) [[mso_id, mso_info['name']] for mso_id, mso_info in MSO_INFO.items()])
else: else:
return False return False
write_string(out, out=sys.stdout) write_string(out, out=sys.stdout)
@@ -129,7 +126,7 @@ def set_compat_opts(opts):
if name not in opts.compat_opts: if name not in opts.compat_opts:
return False return False
opts.compat_opts.discard(name) opts.compat_opts.discard(name)
opts.compat_opts.update([f'*{name}']) opts.compat_opts.update(['*%s' % name])
return True return True
def set_default_compat(compat_name, opt_name, default=True, remove_compat=True): def set_default_compat(compat_name, opt_name, default=True, remove_compat=True):
@@ -156,9 +153,6 @@ def set_compat_opts(opts):
opts.embed_infojson = False opts.embed_infojson = False
if 'format-sort' in opts.compat_opts: if 'format-sort' in opts.compat_opts:
opts.format_sort.extend(FormatSorter.ytdl_default) opts.format_sort.extend(FormatSorter.ytdl_default)
elif 'prefer-vp9-sort' in opts.compat_opts:
opts.format_sort.extend(FormatSorter._prefer_vp9_sort)
_video_multistreams_set = set_default_compat('multistreams', 'allow_multiple_video_streams', False, remove_compat=False) _video_multistreams_set = set_default_compat('multistreams', 'allow_multiple_video_streams', False, remove_compat=False)
_audio_multistreams_set = set_default_compat('multistreams', 'allow_multiple_audio_streams', False, remove_compat=False) _audio_multistreams_set = set_default_compat('multistreams', 'allow_multiple_audio_streams', False, remove_compat=False)
if _video_multistreams_set is False and _audio_multistreams_set is False: if _video_multistreams_set is False and _audio_multistreams_set is False:
@@ -225,7 +219,7 @@ def validate_options(opts):
validate_minmax(opts.sleep_interval, opts.max_sleep_interval, 'sleep interval') validate_minmax(opts.sleep_interval, opts.max_sleep_interval, 'sleep interval')
if opts.wait_for_video is not None: if opts.wait_for_video is not None:
min_wait, max_wait, *_ = map(parse_duration, [*opts.wait_for_video.split('-', 1), None]) min_wait, max_wait, *_ = map(parse_duration, opts.wait_for_video.split('-', 1) + [None])
validate(min_wait is not None and not (max_wait is None and '-' in opts.wait_for_video), validate(min_wait is not None and not (max_wait is None and '-' in opts.wait_for_video),
'time range to wait for video', opts.wait_for_video) 'time range to wait for video', opts.wait_for_video)
validate_minmax(min_wait, max_wait, 'time range to wait for video') validate_minmax(min_wait, max_wait, 'time range to wait for video')
@@ -236,11 +230,6 @@ def validate_options(opts):
validate_regex('format sorting', f, FormatSorter.regex) validate_regex('format sorting', f, FormatSorter.regex)
# Postprocessor formats # Postprocessor formats
if opts.convertsubtitles == 'none':
opts.convertsubtitles = None
if opts.convertthumbnails == 'none':
opts.convertthumbnails = None
validate_regex('merge output format', opts.merge_output_format, validate_regex('merge output format', opts.merge_output_format,
r'({0})(/({0}))*'.format('|'.join(map(re.escape, FFmpegMergerPP.SUPPORTED_EXTS)))) r'({0})(/({0}))*'.format('|'.join(map(re.escape, FFmpegMergerPP.SUPPORTED_EXTS))))
validate_regex('audio format', opts.audioformat, FFmpegExtractAudioPP.FORMAT_RE) validate_regex('audio format', opts.audioformat, FFmpegExtractAudioPP.FORMAT_RE)
@@ -260,11 +249,9 @@ def validate_options(opts):
elif value in ('inf', 'infinite'): elif value in ('inf', 'infinite'):
return float('inf') return float('inf')
try: try:
int_value = int(value) return int(value)
except (TypeError, ValueError): except (TypeError, ValueError):
validate(False, f'{name} retry count', value) validate(False, f'{name} retry count', value)
validate_positive(f'{name} retry count', int_value)
return int_value
opts.retries = parse_retries('download', opts.retries) opts.retries = parse_retries('download', opts.retries)
opts.fragment_retries = parse_retries('fragment', opts.fragment_retries) opts.fragment_retries = parse_retries('fragment', opts.fragment_retries)
@@ -274,9 +261,9 @@ def validate_options(opts):
# Retry sleep function # Retry sleep function
def parse_sleep_func(expr): def parse_sleep_func(expr):
NUMBER_RE = r'\d+(?:\.\d+)?' NUMBER_RE = r'\d+(?:\.\d+)?'
op, start, limit, step, *_ = (*tuple(re.fullmatch( op, start, limit, step, *_ = tuple(re.fullmatch(
rf'(?:(linear|exp)=)?({NUMBER_RE})(?::({NUMBER_RE})?)?(?::({NUMBER_RE}))?', rf'(?:(linear|exp)=)?({NUMBER_RE})(?::({NUMBER_RE})?)?(?::({NUMBER_RE}))?',
expr.strip()).groups()), None, None) expr.strip()).groups()) + (None, None)
if op == 'exp': if op == 'exp':
return lambda n: min(float(start) * (float(step or 2) ** n), float(limit or 'inf')) return lambda n: min(float(start) * (float(step or 2) ** n), float(limit or 'inf'))
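Both sides of this hunk implement the same `--retry-sleep` grammar, `[linear|exp=]start[:limit][:step]`, and differ only in how the match tuple is padded with `None`. A self-contained sketch (the linear-step default of 0 is an assumption here):

```
import re

NUMBER_RE = r'\d+(?:\.\d+)?'

def parse_sleep_func(expr):
    match = re.fullmatch(
        rf'(?:(linear|exp)=)?({NUMBER_RE})(?::({NUMBER_RE})?)?(?::({NUMBER_RE}))?',
        expr.strip())
    if not match:
        return None
    op, start, limit, step = match.groups()
    if op == 'exp':
        return lambda n: min(float(start) * (float(step or 2) ** n), float(limit or 'inf'))
    # linear growth; the step default of 0 is an assumption for this sketch
    return lambda n: min(float(start) + float(step or 0) * n, float(limit or 'inf'))

sleep = parse_sleep_func('exp=1:60')
print([sleep(n) for n in range(8)])  # 1.0, 2.0, 4.0, ... capped at 60.0
```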
@@ -294,20 +281,18 @@ def validate_options(opts):
raise ValueError(f'invalid {key} retry sleep expression {expr!r}') raise ValueError(f'invalid {key} retry sleep expression {expr!r}')
# Bytes # Bytes
def validate_bytes(name, value, strict_positive=False): def validate_bytes(name, value):
if value is None: if value is None:
return None return None
numeric_limit = parse_bytes(value) numeric_limit = parse_bytes(value)
validate(numeric_limit is not None, name, value) validate(numeric_limit is not None, 'rate limit', value)
if strict_positive:
validate_positive(name, numeric_limit, True)
return numeric_limit return numeric_limit
opts.ratelimit = validate_bytes('rate limit', opts.ratelimit, True) opts.ratelimit = validate_bytes('rate limit', opts.ratelimit)
opts.throttledratelimit = validate_bytes('throttled rate limit', opts.throttledratelimit) opts.throttledratelimit = validate_bytes('throttled rate limit', opts.throttledratelimit)
opts.min_filesize = validate_bytes('min filesize', opts.min_filesize) opts.min_filesize = validate_bytes('min filesize', opts.min_filesize)
opts.max_filesize = validate_bytes('max filesize', opts.max_filesize) opts.max_filesize = validate_bytes('max filesize', opts.max_filesize)
opts.buffersize = validate_bytes('buffer size', opts.buffersize, True) opts.buffersize = validate_bytes('buffer size', opts.buffersize)
opts.http_chunk_size = validate_bytes('http chunk size', opts.http_chunk_size) opts.http_chunk_size = validate_bytes('http chunk size', opts.http_chunk_size)
# Output templates # Output templates
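All of these options go through `parse_bytes`, which accepts human-readable sizes. A simplified, hypothetical stand-in to show the accepted shape (the real parser in `yt_dlp.utils` is more permissive):

```
import re

_SUFFIXES = {'': 1, 'K': 1024, 'M': 1024**2, 'G': 1024**3, 'T': 1024**4}

def parse_bytes_sketch(value):
    match = re.fullmatch(r'(?i)\s*([\d.]+)\s*([KMGT]?)(?:i?B)?\s*', str(value))
    if not match:
        return None
    number, suffix = match.groups()
    try:
        return int(float(number) * _SUFFIXES[suffix.upper()])
    except ValueError:
        return None

print(parse_bytes_sketch('50K'))   # 51200
print(parse_bytes_sketch('4.2M'))  # 4404019
```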
@@ -402,19 +387,16 @@ def validate_options(opts):
f'Supported keyrings are: {", ".join(sorted(SUPPORTED_KEYRINGS))}') f'Supported keyrings are: {", ".join(sorted(SUPPORTED_KEYRINGS))}')
opts.cookiesfrombrowser = (browser_name, profile, keyring, container) opts.cookiesfrombrowser = (browser_name, profile, keyring, container)
if opts.impersonate is not None:
opts.impersonate = ImpersonateTarget.from_str(opts.impersonate.lower())
# MetadataParser # MetadataParser
def metadataparser_actions(f): def metadataparser_actions(f):
if isinstance(f, str): if isinstance(f, str):
cmd = f'--parse-metadata {shell_quote(f)}' cmd = '--parse-metadata %s' % compat_shlex_quote(f)
try: try:
actions = [MetadataFromFieldPP.to_action(f)] actions = [MetadataFromFieldPP.to_action(f)]
except Exception as err: except Exception as err:
raise ValueError(f'{cmd} is invalid; {err}') raise ValueError(f'{cmd} is invalid; {err}')
else: else:
cmd = f'--replace-in-metadata {shell_quote(f)}' cmd = '--replace-in-metadata %s' % ' '.join(map(compat_shlex_quote, f))
actions = ((MetadataParserPP.Actions.REPLACE, x, *f[1:]) for x in f[0].split(',')) actions = ((MetadataParserPP.Actions.REPLACE, x, *f[1:]) for x in f[0].split(','))
for action in actions: for action in actions:
@@ -425,17 +407,13 @@ def validate_options(opts):
yield action yield action
if opts.metafromtitle is not None: if opts.metafromtitle is not None:
opts.parse_metadata.setdefault('pre_process', []).append(f'title:{opts.metafromtitle}') opts.parse_metadata.setdefault('pre_process', []).append('title:%s' % opts.metafromtitle)
opts.parse_metadata = { opts.parse_metadata = {
k: list(itertools.chain(*map(metadataparser_actions, v))) k: list(itertools.chain(*map(metadataparser_actions, v)))
for k, v in opts.parse_metadata.items() for k, v in opts.parse_metadata.items()
} }
# Other options # Other options
opts.plugin_dirs = opts.plugin_dirs
if opts.plugin_dirs is None:
opts.plugin_dirs = ['default']
if opts.playlist_items is not None: if opts.playlist_items is not None:
try: try:
tuple(PlaylistEntries.parse_playlist_items(opts.playlist_items)) tuple(PlaylistEntries.parse_playlist_items(opts.playlist_items))
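Back on the metadata-parser hunk above: a `--parse-metadata FROM:TO` rule can be pictured as naming a source field and turning the `%(name)s` fields of the target template into named capture groups. A hypothetical sketch of that conversion (the real splitting lives in `MetadataFromFieldPP.to_action`):

```
import re

def parse_metadata_action_sketch(spec):
    source, _, template = spec.partition(':')
    regex = re.sub(r'%\((\w+)\)s', lambda m: f'(?P<{m.group(1)}>.+)', template)
    return source, re.compile(regex)

source, rx = parse_metadata_action_sketch('title:%(artist)s - %(song)s')
print(source, rx.fullmatch('Daft Punk - Around the World').groupdict())
# title {'artist': 'Daft Punk', 'song': 'Around the World'}
```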
@@ -482,7 +460,7 @@ def validate_options(opts):
default_downloader = ed.get_basename() default_downloader = ed.get_basename()
for policy in opts.color.values(): for policy in opts.color.values():
if policy not in ('always', 'auto', 'auto-tty', 'no_color', 'no_color-tty', 'never'): if policy not in ('always', 'auto', 'no_color', 'never'):
raise ValueError(f'"{policy}" is not a valid color policy') raise ValueError(f'"{policy}" is not a valid color policy')
warnings, deprecation_warnings = [], [] warnings, deprecation_warnings = [], []
@@ -608,13 +586,6 @@ def validate_options(opts):
if opts.ap_username is not None and opts.ap_password is None: if opts.ap_username is not None and opts.ap_password is None:
opts.ap_password = getpass.getpass('Type TV provider account password and press [Return]: ') opts.ap_password = getpass.getpass('Type TV provider account password and press [Return]: ')
# compat option changes global state destructively; only allow from cli
if 'allow-unsafe-ext' in opts.compat_opts:
warnings.append(
'Using allow-unsafe-ext opens you up to potential attacks. '
'Use with great care!')
_UnsafeExtensionError.sanitize_extension = lambda x, prepend=False: x
return warnings, deprecation_warnings return warnings, deprecation_warnings
@@ -625,7 +596,7 @@ def get_postprocessors(opts):
yield { yield {
'key': 'MetadataParser', 'key': 'MetadataParser',
'actions': actions, 'actions': actions,
'when': when, 'when': when
} }
sponsorblock_query = opts.sponsorblock_mark | opts.sponsorblock_remove sponsorblock_query = opts.sponsorblock_mark | opts.sponsorblock_remove
if sponsorblock_query: if sponsorblock_query:
@@ -633,19 +604,19 @@ def get_postprocessors(opts):
'key': 'SponsorBlock', 'key': 'SponsorBlock',
'categories': sponsorblock_query, 'categories': sponsorblock_query,
'api': opts.sponsorblock_api, 'api': opts.sponsorblock_api,
'when': 'after_filter', 'when': 'after_filter'
} }
if opts.convertsubtitles: if opts.convertsubtitles:
yield { yield {
'key': 'FFmpegSubtitlesConvertor', 'key': 'FFmpegSubtitlesConvertor',
'format': opts.convertsubtitles, 'format': opts.convertsubtitles,
'when': 'before_dl', 'when': 'before_dl'
} }
if opts.convertthumbnails: if opts.convertthumbnails:
yield { yield {
'key': 'FFmpegThumbnailsConvertor', 'key': 'FFmpegThumbnailsConvertor',
'format': opts.convertthumbnails, 'format': opts.convertthumbnails,
'when': 'before_dl', 'when': 'before_dl'
} }
if opts.extractaudio: if opts.extractaudio:
yield { yield {
@@ -670,7 +641,7 @@ def get_postprocessors(opts):
yield { yield {
'key': 'FFmpegEmbedSubtitle', 'key': 'FFmpegEmbedSubtitle',
# already_have_subtitle = True prevents the file from being deleted after embedding # already_have_subtitle = True prevents the file from being deleted after embedding
'already_have_subtitle': opts.writesubtitles and keep_subs, 'already_have_subtitle': opts.writesubtitles and keep_subs
} }
if not opts.writeautomaticsub and keep_subs: if not opts.writeautomaticsub and keep_subs:
opts.writesubtitles = True opts.writesubtitles = True
@@ -683,7 +654,7 @@ def get_postprocessors(opts):
'remove_sponsor_segments': opts.sponsorblock_remove, 'remove_sponsor_segments': opts.sponsorblock_remove,
'remove_ranges': opts.remove_ranges, 'remove_ranges': opts.remove_ranges,
'sponsorblock_chapter_title': opts.sponsorblock_chapter_title, 'sponsorblock_chapter_title': opts.sponsorblock_chapter_title,
'force_keyframes': opts.force_keyframes_at_cuts, 'force_keyframes': opts.force_keyframes_at_cuts
} }
# FFmpegMetadataPP should be run after FFmpegVideoConvertorPP and # FFmpegMetadataPP should be run after FFmpegVideoConvertorPP and
# FFmpegExtractAudioPP as containers before conversion may not support # FFmpegExtractAudioPP as containers before conversion may not support
@@ -717,7 +688,7 @@ def get_postprocessors(opts):
yield { yield {
'key': 'EmbedThumbnail', 'key': 'EmbedThumbnail',
# already_have_thumbnail = True prevents the file from being deleted after embedding # already_have_thumbnail = True prevents the file from being deleted after embedding
'already_have_thumbnail': opts.writethumbnail, 'already_have_thumbnail': opts.writethumbnail
} }
if not opts.writethumbnail: if not opts.writethumbnail:
opts.writethumbnail = True opts.writethumbnail = True
@@ -751,7 +722,7 @@ ParsedOptions = collections.namedtuple('ParsedOptions', ('parser', 'options', 'u
def parse_options(argv=None): def parse_options(argv=None):
"""@returns ParsedOptions(parser, opts, urls, ydl_opts)""" """@returns ParsedOptions(parser, opts, urls, ydl_opts)"""
parser, opts, urls = parseOpts(argv) parser, opts, urls = parseOpts(argv)
urls = get_urls(urls, opts.batchfile, -1 if opts.quiet and not opts.verbose else opts.verbose) urls = get_urls(urls, opts.batchfile, opts.verbose)
set_compat_opts(opts) set_compat_opts(opts)
try: try:
@@ -764,7 +735,7 @@ def parse_options(argv=None):
print_only = bool(opts.forceprint) and all(k not in opts.forceprint for k in POSTPROCESS_WHEN[3:]) print_only = bool(opts.forceprint) and all(k not in opts.forceprint for k in POSTPROCESS_WHEN[3:])
any_getting = any(getattr(opts, k) for k in ( any_getting = any(getattr(opts, k) for k in (
'dumpjson', 'dump_single_json', 'getdescription', 'getduration', 'getfilename', 'dumpjson', 'dump_single_json', 'getdescription', 'getduration', 'getfilename',
'getformat', 'getid', 'getthumbnail', 'gettitle', 'geturl', 'getformat', 'getid', 'getthumbnail', 'gettitle', 'geturl'
)) ))
if opts.quiet is None: if opts.quiet is None:
opts.quiet = any_getting or opts.print_json or bool(opts.forceprint) opts.quiet = any_getting or opts.print_json or bool(opts.forceprint)
@@ -859,7 +830,6 @@ def parse_options(argv=None):
'noprogress': opts.quiet if opts.noprogress is None else opts.noprogress, 'noprogress': opts.quiet if opts.noprogress is None else opts.noprogress,
'progress_with_newline': opts.progress_with_newline, 'progress_with_newline': opts.progress_with_newline,
'progress_template': opts.progress_template, 'progress_template': opts.progress_template,
'progress_delta': opts.progress_delta,
'playliststart': opts.playliststart, 'playliststart': opts.playliststart,
'playlistend': opts.playlistend, 'playlistend': opts.playlistend,
'playlistreverse': opts.playlist_reverse, 'playlistreverse': opts.playlist_reverse,
@@ -888,8 +858,8 @@ def parse_options(argv=None):
'listsubtitles': opts.listsubtitles, 'listsubtitles': opts.listsubtitles,
'subtitlesformat': opts.subtitlesformat, 'subtitlesformat': opts.subtitlesformat,
'subtitleslangs': opts.subtitleslangs, 'subtitleslangs': opts.subtitleslangs,
'matchtitle': opts.matchtitle, 'matchtitle': decodeOption(opts.matchtitle),
'rejecttitle': opts.rejecttitle, 'rejecttitle': decodeOption(opts.rejecttitle),
'max_downloads': opts.max_downloads, 'max_downloads': opts.max_downloads,
'prefer_free_formats': opts.prefer_free_formats, 'prefer_free_formats': opts.prefer_free_formats,
'trim_file_name': opts.trim_file_name, 'trim_file_name': opts.trim_file_name,
@@ -940,7 +910,6 @@ def parse_options(argv=None):
'postprocessors': postprocessors, 'postprocessors': postprocessors,
'fixup': opts.fixup, 'fixup': opts.fixup,
'source_address': opts.source_address, 'source_address': opts.source_address,
'impersonate': opts.impersonate,
'call_home': opts.call_home, 'call_home': opts.call_home,
'sleep_interval_requests': opts.sleep_interval_requests, 'sleep_interval_requests': opts.sleep_interval_requests,
'sleep_interval': opts.sleep_interval, 'sleep_interval': opts.sleep_interval,
@@ -990,11 +959,6 @@ def _real_main(argv=None):
if opts.ffmpeg_location: if opts.ffmpeg_location:
FFmpegPostProcessor._ffmpeg_location.set(opts.ffmpeg_location) FFmpegPostProcessor._ffmpeg_location.set(opts.ffmpeg_location)
# load all plugins into the global lookup
plugin_dirs.value = opts.plugin_dirs
if plugin_dirs.value:
_load_all_plugins()
with YoutubeDL(ydl_opts) as ydl: with YoutubeDL(ydl_opts) as ydl:
pre_process = opts.update_self or opts.rm_cachedir pre_process = opts.update_self or opts.rm_cachedir
actual_use = all_urls or opts.load_info_filename actual_use = all_urls or opts.load_info_filename
@@ -1015,68 +979,11 @@ def _real_main(argv=None):
traceback.print_exc() traceback.print_exc()
ydl._download_retcode = 100 ydl._download_retcode = 100
if opts.list_impersonate_targets:
known_targets = [
# List of simplified targets we know are supported,
# to help users know what dependencies may be required.
(ImpersonateTarget('chrome'), 'curl_cffi'),
(ImpersonateTarget('safari'), 'curl_cffi'),
(ImpersonateTarget('firefox'), 'curl_cffi>=0.10'),
(ImpersonateTarget('edge'), 'curl_cffi'),
]
available_targets = ydl._get_available_impersonate_targets()
def make_row(target, handler):
return [
join_nonempty(target.client.title(), target.version, delim='-') or '-',
join_nonempty((target.os or '').title(), target.os_version, delim='-') or '-',
handler,
]
rows = [make_row(target, handler) for target, handler in available_targets]
for known_target, known_handler in known_targets:
if not any(
known_target in target and known_handler.startswith(handler)
for target, handler in available_targets
):
rows.insert(0, [
ydl._format_out(text, ydl.Styles.SUPPRESS)
for text in make_row(known_target, f'{known_handler} (unavailable)')
])
ydl.to_screen('[info] Available impersonate targets')
ydl.to_stdout(render_table(['Client', 'OS', 'Source'], rows, extra_gap=2, delim='-'))
return
if not actual_use: if not actual_use:
if pre_process: if pre_process:
return ydl._download_retcode return ydl._download_retcode
args = sys.argv[1:] if argv is None else argv ydl.warn_if_short_id(sys.argv[1:] if argv is None else argv)
ydl.warn_if_short_id(args)
# Show a useful error message and wait for keypress if not launched from shell on Windows
if not args and os.name == 'nt' and getattr(sys, 'frozen', False):
import ctypes.wintypes
import msvcrt
kernel32 = ctypes.WinDLL('Kernel32')
buffer = (1 * ctypes.wintypes.DWORD)()
attached_processes = kernel32.GetConsoleProcessList(buffer, 1)
# If we only have a single process attached, then the executable was double clicked
# When using `pyinstaller` with `--onefile`, two processes get attached
is_onefile = hasattr(sys, '_MEIPASS') and os.path.basename(sys._MEIPASS).startswith('_MEI')
if attached_processes == 1 or (is_onefile and attached_processes == 2):
print(parser._generate_error_message(
'Do not double-click the executable, instead call it from a command line.\n'
'Please read the README for further information on how to use yt-dlp: '
'https://github.com/yt-dlp/yt-dlp#readme'))
msvcrt.getch()
_exit(2)
parser.error( parser.error(
'You must provide at least one URL.\n' 'You must provide at least one URL.\n'
'Type yt-dlp --help to see a list of all options.') 'Type yt-dlp --help to see a list of all options.')
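The new `--list-impersonate-targets` table above builds each cell with `join_nonempty`; a sketch of that helper's assumed behaviour (drop falsy parts, join the rest with the delimiter):

```
def join_nonempty(*values, delim='-'):
    # assumed semantics of yt_dlp.utils.join_nonempty
    return delim.join(map(str, filter(None, values)))

print(join_nonempty('Chrome', '110', delim='-') or '-')  # Chrome-110
print(join_nonempty(None, None, delim='-') or '-')       # -
```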
@@ -1095,10 +1002,11 @@ def _real_main(argv=None):
def main(argv=None): def main(argv=None):
IN_CLI.value = True global _IN_CLI
_IN_CLI = True
try: try:
_exit(*variadic(_real_main(argv))) _exit(*variadic(_real_main(argv)))
except (CookieLoadError, DownloadError): except DownloadError:
_exit(1) _exit(1)
except SameFileError as e: except SameFileError as e:
_exit(f'ERROR: {e}') _exit(f'ERROR: {e}')
@@ -1116,9 +1024,9 @@ def main(argv=None):
from .extractor import gen_extractors, list_extractors from .extractor import gen_extractors, list_extractors
__all__ = [ __all__ = [
'main',
'YoutubeDL', 'YoutubeDL',
'parse_options',
'gen_extractors', 'gen_extractors',
'list_extractors', 'list_extractors',
'main',
'parse_options',
] ]

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# Execute with # Execute with
# $ python3 -m yt_dlp # $ python -m yt_dlp
import sys import sys

View File

@@ -1,6 +1,6 @@
import sys import sys
from PyInstaller.utils.hooks import collect_submodules, collect_data_files from PyInstaller.utils.hooks import collect_submodules
def pycryptodome_module(): def pycryptodome_module():
@@ -10,7 +10,7 @@ def pycryptodome_module():
try: try:
import Crypto # noqa: F401 import Crypto # noqa: F401
print('WARNING: Using Crypto since Cryptodome is not available. ' print('WARNING: Using Crypto since Cryptodome is not available. '
'Install with: python3 -m pip install pycryptodomex', file=sys.stderr) 'Install with: pip install pycryptodomex', file=sys.stderr)
return 'Crypto' return 'Crypto'
except ImportError: except ImportError:
pass pass
@@ -21,16 +21,12 @@ def get_hidden_imports():
yield from ('yt_dlp.compat._legacy', 'yt_dlp.compat._deprecated') yield from ('yt_dlp.compat._legacy', 'yt_dlp.compat._deprecated')
yield from ('yt_dlp.utils._legacy', 'yt_dlp.utils._deprecated') yield from ('yt_dlp.utils._legacy', 'yt_dlp.utils._deprecated')
yield pycryptodome_module() yield pycryptodome_module()
# Only `websockets` is required, others are collected just in case yield from collect_submodules('websockets')
for module in ('websockets', 'requests', 'urllib3'):
yield from collect_submodules(module)
# These are auto-detected, but explicitly add them just in case # These are auto-detected, but explicitly add them just in case
yield from ('mutagen', 'brotli', 'certifi', 'secretstorage', 'curl_cffi') yield from ('mutagen', 'brotli', 'certifi')
hiddenimports = list(get_hidden_imports()) hiddenimports = list(get_hidden_imports())
print(f'Adding imports: {hiddenimports}') print(f'Adding imports: {hiddenimports}')
excludedimports = ['youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins', 'devscripts', 'bundle'] excludedimports = ['youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins', 'devscripts']
datas = collect_data_files('curl_cffi', includes=['cacert.pem'])

View File

@@ -3,6 +3,7 @@ from math import ceil
from .compat import compat_ord from .compat import compat_ord
from .dependencies import Cryptodome from .dependencies import Cryptodome
from .utils import bytes_to_intlist, intlist_to_bytes
if Cryptodome.AES: if Cryptodome.AES:
def aes_cbc_decrypt_bytes(data, key, iv): def aes_cbc_decrypt_bytes(data, key, iv):
@@ -16,15 +17,15 @@ if Cryptodome.AES:
else: else:
def aes_cbc_decrypt_bytes(data, key, iv): def aes_cbc_decrypt_bytes(data, key, iv):
""" Decrypt bytes with AES-CBC using native implementation since pycryptodome is unavailable """ """ Decrypt bytes with AES-CBC using native implementation since pycryptodome is unavailable """
return bytes(aes_cbc_decrypt(*map(list, (data, key, iv)))) return intlist_to_bytes(aes_cbc_decrypt(*map(bytes_to_intlist, (data, key, iv))))
def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce): def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
""" Decrypt bytes with AES-GCM using native implementation since pycryptodome is unavailable """ """ Decrypt bytes with AES-GCM using native implementation since pycryptodome is unavailable """
return bytes(aes_gcm_decrypt_and_verify(*map(list, (data, key, tag, nonce)))) return intlist_to_bytes(aes_gcm_decrypt_and_verify(*map(bytes_to_intlist, (data, key, tag, nonce))))
def aes_cbc_encrypt_bytes(data, key, iv, **kwargs): def aes_cbc_encrypt_bytes(data, key, iv, **kwargs):
return bytes(aes_cbc_encrypt(*map(list, (data, key, iv)), **kwargs)) return intlist_to_bytes(aes_cbc_encrypt(*map(bytes_to_intlist, (data, key, iv)), **kwargs))
BLOCK_SIZE_BYTES = 16 BLOCK_SIZE_BYTES = 16
@@ -67,7 +68,7 @@ def pad_block(block, padding_mode):
raise NotImplementedError(f'Padding mode {padding_mode} is not implemented') raise NotImplementedError(f'Padding mode {padding_mode} is not implemented')
if padding_mode == 'iso7816' and padding_size: if padding_mode == 'iso7816' and padding_size:
block = [*block, 0x80] # NB: += mutates list block = block + [0x80] # NB: += mutates list
padding_size -= 1 padding_size -= 1
return block + [PADDING_BYTE[padding_mode]] * padding_size return block + [PADDING_BYTE[padding_mode]] * padding_size
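For context on this hunk: ISO 7816-4 padding appends a single 0x80 marker byte and then zero bytes up to the block size, and the `[*block, 0x80]` spelling keeps the caller's list unmutated. A sketch assuming a partial final block:

```
BLOCK_SIZE_BYTES = 16

def pad_iso7816(block):
    # one 0x80 marker, then zeros; assumes len(block) < BLOCK_SIZE_BYTES
    padding_size = BLOCK_SIZE_BYTES - len(block)
    return block + [0x80] + [0x00] * (padding_size - 1)

print(pad_iso7816(list(b'hello')))
# [104, 101, 108, 108, 111, 128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```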
@@ -83,7 +84,7 @@ def aes_ecb_encrypt(data, key, iv=None):
@returns {int[]} encrypted data @returns {int[]} encrypted data
""" """
expanded_key = key_expansion(key) expanded_key = key_expansion(key)
block_count = ceil(len(data) / BLOCK_SIZE_BYTES) block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
encrypted_data = [] encrypted_data = []
for i in range(block_count): for i in range(block_count):
@@ -103,13 +104,15 @@ def aes_ecb_decrypt(data, key, iv=None):
@returns {int[]} decrypted data @returns {int[]} decrypted data
""" """
expanded_key = key_expansion(key) expanded_key = key_expansion(key)
block_count = ceil(len(data) / BLOCK_SIZE_BYTES) block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
encrypted_data = [] encrypted_data = []
for i in range(block_count): for i in range(block_count):
block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES] block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
encrypted_data += aes_decrypt(block, expanded_key) encrypted_data += aes_decrypt(block, expanded_key)
return encrypted_data[:len(data)] encrypted_data = encrypted_data[:len(data)]
return encrypted_data
def aes_ctr_decrypt(data, key, iv): def aes_ctr_decrypt(data, key, iv):
@@ -134,7 +137,7 @@ def aes_ctr_encrypt(data, key, iv):
@returns {int[]} encrypted data @returns {int[]} encrypted data
""" """
expanded_key = key_expansion(key) expanded_key = key_expansion(key)
block_count = ceil(len(data) / BLOCK_SIZE_BYTES) block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
counter = iter_vector(iv) counter = iter_vector(iv)
encrypted_data = [] encrypted_data = []
@@ -145,7 +148,9 @@ def aes_ctr_encrypt(data, key, iv):
cipher_counter_block = aes_encrypt(counter_block, expanded_key) cipher_counter_block = aes_encrypt(counter_block, expanded_key)
encrypted_data += xor(block, cipher_counter_block) encrypted_data += xor(block, cipher_counter_block)
return encrypted_data[:len(data)] encrypted_data = encrypted_data[:len(data)]
return encrypted_data
def aes_cbc_decrypt(data, key, iv): def aes_cbc_decrypt(data, key, iv):
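CTR mode XORs a keystream over the data, so decryption is the same operation as encryption and no padding is needed, hence the truncation to `len(data)` above. A round-trip sketch, assuming the int-list API used throughout this module:

```
from yt_dlp.aes import aes_ctr_decrypt, aes_ctr_encrypt

key = list(range(16))   # a 16-byte AES-128 key as a list of ints
iv = [0] * 16           # initial counter block
data = list(b'CTR mode is its own inverse')

ciphertext = aes_ctr_encrypt(data, key, iv)
assert aes_ctr_decrypt(ciphertext, key, iv) == data
```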
@@ -158,7 +163,7 @@ def aes_cbc_decrypt(data, key, iv):
@returns {int[]} decrypted data @returns {int[]} decrypted data
""" """
expanded_key = key_expansion(key) expanded_key = key_expansion(key)
block_count = ceil(len(data) / BLOCK_SIZE_BYTES) block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
decrypted_data = [] decrypted_data = []
previous_cipher_block = iv previous_cipher_block = iv
@@ -169,7 +174,9 @@ def aes_cbc_decrypt(data, key, iv):
decrypted_block = aes_decrypt(block, expanded_key) decrypted_block = aes_decrypt(block, expanded_key)
decrypted_data += xor(decrypted_block, previous_cipher_block) decrypted_data += xor(decrypted_block, previous_cipher_block)
previous_cipher_block = block previous_cipher_block = block
return decrypted_data[:len(data)] decrypted_data = decrypted_data[:len(data)]
return decrypted_data
def aes_cbc_encrypt(data, key, iv, *, padding_mode='pkcs7'): def aes_cbc_encrypt(data, key, iv, *, padding_mode='pkcs7'):
@@ -183,7 +190,7 @@ def aes_cbc_encrypt(data, key, iv, *, padding_mode='pkcs7'):
@returns {int[]} encrypted data @returns {int[]} encrypted data
""" """
expanded_key = key_expansion(key) expanded_key = key_expansion(key)
block_count = ceil(len(data) / BLOCK_SIZE_BYTES) block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
encrypted_data = [] encrypted_data = []
previous_cipher_block = iv previous_cipher_block = iv
@@ -217,10 +224,10 @@ def aes_gcm_decrypt_and_verify(data, key, tag, nonce):
hash_subkey = aes_encrypt([0] * BLOCK_SIZE_BYTES, key_expansion(key)) hash_subkey = aes_encrypt([0] * BLOCK_SIZE_BYTES, key_expansion(key))
if len(nonce) == 12: if len(nonce) == 12:
j0 = [*nonce, 0, 0, 0, 1] j0 = nonce + [0, 0, 0, 1]
else: else:
fill = (BLOCK_SIZE_BYTES - (len(nonce) % BLOCK_SIZE_BYTES)) % BLOCK_SIZE_BYTES + 8 fill = (BLOCK_SIZE_BYTES - (len(nonce) % BLOCK_SIZE_BYTES)) % BLOCK_SIZE_BYTES + 8
ghash_in = nonce + [0] * fill + list((8 * len(nonce)).to_bytes(8, 'big')) ghash_in = nonce + [0] * fill + bytes_to_intlist((8 * len(nonce)).to_bytes(8, 'big'))
j0 = ghash(hash_subkey, ghash_in) j0 = ghash(hash_subkey, ghash_in)
# TODO: add nonce support to aes_ctr_decrypt # TODO: add nonce support to aes_ctr_decrypt
@@ -229,17 +236,17 @@ def aes_gcm_decrypt_and_verify(data, key, tag, nonce):
iv_ctr = inc(j0) iv_ctr = inc(j0)
decrypted_data = aes_ctr_decrypt(data, key, iv_ctr + [0] * (BLOCK_SIZE_BYTES - len(iv_ctr))) decrypted_data = aes_ctr_decrypt(data, key, iv_ctr + [0] * (BLOCK_SIZE_BYTES - len(iv_ctr)))
pad_len = (BLOCK_SIZE_BYTES - (len(data) % BLOCK_SIZE_BYTES)) % BLOCK_SIZE_BYTES pad_len = len(data) // 16 * 16
s_tag = ghash( s_tag = ghash(
hash_subkey, hash_subkey,
data data
+ [0] * pad_len # pad + [0] * (BLOCK_SIZE_BYTES - len(data) + pad_len) # pad
+ list((0 * 8).to_bytes(8, 'big') # length of associated data + bytes_to_intlist((0 * 8).to_bytes(8, 'big') # length of associated data
+ ((len(data) * 8).to_bytes(8, 'big'))), # length of data + ((len(data) * 8).to_bytes(8, 'big'))) # length of data
) )
if tag != aes_ctr_encrypt(s_tag, key, j0): if tag != aes_ctr_encrypt(s_tag, key, j0):
raise ValueError('Mismatching authentication tag') raise ValueError("Mismatching authentication tag")
return decrypted_data return decrypted_data
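A quick way to sanity-check this pure-Python GCM path is to round-trip against a reference implementation; the sketch below assumes pycryptodomex is installed (the `Cryptodome` dependency referenced elsewhere in this diff):

```
import os
from Cryptodome.Cipher import AES  # pycryptodomex

from yt_dlp.aes import aes_gcm_decrypt_and_verify

key, nonce = os.urandom(16), os.urandom(12)
plaintext = b'GCM round trip'
cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
ciphertext, tag = cipher.encrypt_and_digest(plaintext)

# the pure-Python path should agree with the reference implementation
decrypted = aes_gcm_decrypt_and_verify(
    list(ciphertext), list(key), list(tag), list(nonce))
assert bytes(decrypted) == plaintext
```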
@@ -281,7 +288,9 @@ def aes_decrypt(data, expanded_key):
data = list(iter_mix_columns(data, MIX_COLUMN_MATRIX_INV)) data = list(iter_mix_columns(data, MIX_COLUMN_MATRIX_INV))
data = shift_rows_inv(data) data = shift_rows_inv(data)
data = sub_bytes_inv(data) data = sub_bytes_inv(data)
return xor(data, expanded_key[:BLOCK_SIZE_BYTES]) data = xor(data, expanded_key[:BLOCK_SIZE_BYTES])
return data
def aes_decrypt_text(data, password, key_size_bytes): def aes_decrypt_text(data, password, key_size_bytes):
@@ -299,8 +308,8 @@ def aes_decrypt_text(data, password, key_size_bytes):
""" """
NONCE_LENGTH_BYTES = 8 NONCE_LENGTH_BYTES = 8
data = list(base64.b64decode(data)) data = bytes_to_intlist(base64.b64decode(data))
password = list(password.encode()) password = bytes_to_intlist(password.encode())
key = password[:key_size_bytes] + [0] * (key_size_bytes - len(password)) key = password[:key_size_bytes] + [0] * (key_size_bytes - len(password))
key = aes_encrypt(key[:BLOCK_SIZE_BYTES], key_expansion(key)) * (key_size_bytes // BLOCK_SIZE_BYTES) key = aes_encrypt(key[:BLOCK_SIZE_BYTES], key_expansion(key)) * (key_size_bytes // BLOCK_SIZE_BYTES)
@@ -309,7 +318,9 @@ def aes_decrypt_text(data, password, key_size_bytes):
cipher = data[NONCE_LENGTH_BYTES:] cipher = data[NONCE_LENGTH_BYTES:]
decrypted_data = aes_ctr_decrypt(cipher, key, nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES)) decrypted_data = aes_ctr_decrypt(cipher, key, nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES))
return bytes(decrypted_data) plaintext = intlist_to_bytes(decrypted_data)
return plaintext
RCON = (0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36) RCON = (0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36)
@@ -417,7 +428,9 @@ def key_expansion(data):
for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0): for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0):
temp = data[-4:] temp = data[-4:]
data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes]) data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
return data[:expanded_key_size_bytes] data = data[:expanded_key_size_bytes]
return data
def iter_vector(iv): def iter_vector(iv):
@@ -498,7 +511,7 @@ def block_product(block_x, block_y):
# NIST SP 800-38D, Algorithm 1 # NIST SP 800-38D, Algorithm 1
if len(block_x) != BLOCK_SIZE_BYTES or len(block_y) != BLOCK_SIZE_BYTES: if len(block_x) != BLOCK_SIZE_BYTES or len(block_y) != BLOCK_SIZE_BYTES:
raise ValueError(f'Length of blocks need to be {BLOCK_SIZE_BYTES} bytes') raise ValueError("Length of blocks need to be %d bytes" % BLOCK_SIZE_BYTES)
block_r = [0xE1] + [0] * (BLOCK_SIZE_BYTES - 1) block_r = [0xE1] + [0] * (BLOCK_SIZE_BYTES - 1)
block_v = block_y[:] block_v = block_y[:]
@@ -521,7 +534,7 @@ def ghash(subkey, data):
# NIST SP 800-38D, Algorithm 2 # NIST SP 800-38D, Algorithm 2
if len(data) % BLOCK_SIZE_BYTES: if len(data) % BLOCK_SIZE_BYTES:
raise ValueError(f'Length of data should be {BLOCK_SIZE_BYTES} bytes') raise ValueError("Length of data should be %d bytes" % BLOCK_SIZE_BYTES)
last_y = [0] * BLOCK_SIZE_BYTES last_y = [0] * BLOCK_SIZE_BYTES
for i in range(0, len(data), BLOCK_SIZE_BYTES): for i in range(0, len(data), BLOCK_SIZE_BYTES):
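`ghash` is GCM's universal hash: it folds the input one 16-byte block at a time, XORing each block into an accumulator and multiplying by the hash subkey in GF(2^128) via `block_product` above. A small demonstration, assuming the module's int-list API:

```
from yt_dlp.aes import aes_encrypt, ghash, key_expansion

key = list(range(16))
# the hash subkey H is the block cipher applied to the all-zero block
hash_subkey = aes_encrypt([0] * 16, key_expansion(key))

digest = ghash(hash_subkey, list(range(32)))  # input must be a multiple of 16 bytes
print(bytes(digest).hex())
```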
@@ -534,17 +547,19 @@ def ghash(subkey, data):
__all__ = [ __all__ = [
'aes_cbc_decrypt', 'aes_cbc_decrypt',
'aes_cbc_decrypt_bytes', 'aes_cbc_decrypt_bytes',
'aes_cbc_encrypt',
'aes_cbc_encrypt_bytes',
'aes_ctr_decrypt', 'aes_ctr_decrypt',
'aes_ctr_encrypt',
'aes_decrypt',
'aes_decrypt_text', 'aes_decrypt_text',
'aes_decrypt',
'aes_ecb_decrypt', 'aes_ecb_decrypt',
'aes_ecb_encrypt',
'aes_encrypt',
'aes_gcm_decrypt_and_verify', 'aes_gcm_decrypt_and_verify',
'aes_gcm_decrypt_and_verify_bytes', 'aes_gcm_decrypt_and_verify_bytes',
'aes_cbc_encrypt',
'aes_cbc_encrypt_bytes',
'aes_ctr_encrypt',
'aes_ecb_encrypt',
'aes_encrypt',
'key_expansion', 'key_expansion',
'pad_block', 'pad_block',
'pkcs7_padding', 'pkcs7_padding',

View File

@@ -81,10 +81,10 @@ class Cache:
cachedir = self._get_root_dir() cachedir = self._get_root_dir()
if not any((term in cachedir) for term in ('cache', 'tmp')): if not any((term in cachedir) for term in ('cache', 'tmp')):
raise Exception(f'Not removing directory {cachedir} - this does not look like a cache dir') raise Exception('Not removing directory %s - this does not look like a cache dir' % cachedir)
self._ydl.to_screen( self._ydl.to_screen(
f'Removing cache dir {cachedir} .', skip_eol=True) 'Removing cache dir %s .' % cachedir, skip_eol=True)
if os.path.exists(cachedir): if os.path.exists(cachedir):
self._ydl.to_screen('.', skip_eol=True) self._ydl.to_screen('.', skip_eol=True)
shutil.rmtree(cachedir) shutil.rmtree(cachedir)

View File

@@ -0,0 +1,5 @@
import warnings
warnings.warn(DeprecationWarning(f'{__name__} is deprecated'))
casefold = str.casefold

View File

@@ -1,4 +1,5 @@
import os import os
import sys
import xml.etree.ElementTree as etree import xml.etree.ElementTree as etree
from .compat_utils import passthrough_module from .compat_utils import passthrough_module
@@ -23,14 +24,36 @@ def compat_etree_fromstring(text):
return etree.XML(text, parser=etree.XMLParser(target=_TreeBuilder())) return etree.XML(text, parser=etree.XMLParser(target=_TreeBuilder()))
compat_os_name = os._name if os.name == 'java' else os.name
if compat_os_name == 'nt':
def compat_shlex_quote(s):
import re
return s if re.match(r'^[-_\w./]+$', s) else '"%s"' % s.replace('"', '\\"')
else:
from shlex import quote as compat_shlex_quote # noqa: F401
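On POSIX the shim is plain `shlex.quote`; the Windows branch quotes only when the string contains characters outside `[-_\w./]`, escaping embedded double quotes. Isolating that rule:

```
import re

def nt_quote(s):
    # same rule as the shim's Windows branch above
    return s if re.match(r'^[-_\w./]+$', s) else '"%s"' % s.replace('"', '\\"')

print(nt_quote('plain_name.mp4'))  # left unquoted
print(nt_quote('with space.mp4'))  # "with space.mp4"
print(nt_quote('say "hi".mp4'))    # "say \"hi\".mp4"
```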
def compat_ord(c): def compat_ord(c):
return c if isinstance(c, int) else ord(c) return c if isinstance(c, int) else ord(c)
if compat_os_name == 'nt' and sys.version_info < (3, 8):
# os.path.realpath on Windows does not follow symbolic links
# prior to Python 3.8 (see https://bugs.python.org/issue9949)
def compat_realpath(path):
while os.path.islink(path):
path = os.path.abspath(os.readlink(path))
return os.path.realpath(path)
else:
compat_realpath = os.path.realpath
# Python 3.8+ does not honor %HOME% on windows, but this breaks compatibility with youtube-dl # Python 3.8+ does not honor %HOME% on windows, but this breaks compatibility with youtube-dl
# See https://github.com/yt-dlp/yt-dlp/issues/792 # See https://github.com/yt-dlp/yt-dlp/issues/792
# https://docs.python.org/3/library/os.path.html#os.path.expanduser # https://docs.python.org/3/library/os.path.html#os.path.expanduser
if os.name in ('nt', 'ce'): if compat_os_name in ('nt', 'ce'):
def compat_expanduser(path): def compat_expanduser(path):
HOME = os.environ.get('HOME') HOME = os.environ.get('HOME')
if not HOME: if not HOME:

View File

@@ -8,14 +8,16 @@ passthrough_module(__name__, '.._legacy', callback=lambda attr: warnings.warn(
DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=6)) DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=6))
del passthrough_module del passthrough_module
import functools # noqa: F401 import base64
import os import urllib.error
import urllib.parse
compat_str = str
compat_os_name = os.name compat_b64decode = base64.b64decode
compat_realpath = os.path.realpath
compat_urlparse = urllib.parse
def compat_shlex_quote(s): compat_parse_qs = urllib.parse.parse_qs
from ..utils import shell_quote compat_urllib_parse_unquote = urllib.parse.unquote
return shell_quote(s) compat_urllib_parse_urlencode = urllib.parse.urlencode
compat_urllib_parse_urlparse = urllib.parse.urlparse

View File

@@ -30,12 +30,11 @@ from asyncio import run as compat_asyncio_run # noqa: F401
from re import Pattern as compat_Pattern # noqa: F401 from re import Pattern as compat_Pattern # noqa: F401
from re import match as compat_Match # noqa: F401 from re import match as compat_Match # noqa: F401
from . import compat_expanduser, compat_HTMLParseError from . import compat_expanduser, compat_HTMLParseError, compat_realpath
from .compat_utils import passthrough_module from .compat_utils import passthrough_module
from ..dependencies import brotli as compat_brotli # noqa: F401 from ..dependencies import brotli as compat_brotli # noqa: F401
from ..dependencies import websockets as compat_websockets # noqa: F401 from ..dependencies import websockets as compat_websockets # noqa: F401
from ..dependencies.Cryptodome import AES as compat_pycrypto_AES # noqa: F401 from ..dependencies.Cryptodome import AES as compat_pycrypto_AES # noqa: F401
from ..networking.exceptions import HTTPError as compat_HTTPError
passthrough_module(__name__, '...utils', ('WINDOWS_VT_MODE', 'windows_enable_vt_mode')) passthrough_module(__name__, '...utils', ('WINDOWS_VT_MODE', 'windows_enable_vt_mode'))
@@ -71,6 +70,7 @@ compat_html_parser_HTMLParseError = compat_HTMLParseError
compat_HTMLParser = compat_html_parser_HTMLParser = html.parser.HTMLParser compat_HTMLParser = compat_html_parser_HTMLParser = html.parser.HTMLParser
compat_http_client = http.client compat_http_client = http.client
compat_http_server = http.server compat_http_server = http.server
compat_HTTPError = urllib.error.HTTPError
compat_input = input compat_input = input
compat_integer_types = (int, ) compat_integer_types = (int, )
compat_itertools_count = itertools.count compat_itertools_count = itertools.count
@@ -78,7 +78,7 @@ compat_kwargs = lambda kwargs: kwargs
compat_map = map compat_map = map
compat_numeric_types = (int, float, complex) compat_numeric_types = (int, float, complex)
compat_os_path_expanduser = compat_expanduser compat_os_path_expanduser = compat_expanduser
compat_os_path_realpath = os.path.realpath compat_os_path_realpath = compat_realpath
compat_print = print compat_print = print
compat_shlex_split = shlex.split compat_shlex_split = shlex.split
compat_socket_create_connection = socket.create_connection compat_socket_create_connection = socket.create_connection
@@ -88,7 +88,7 @@ compat_struct_unpack = struct.unpack
compat_subprocess_get_DEVNULL = lambda: subprocess.DEVNULL compat_subprocess_get_DEVNULL = lambda: subprocess.DEVNULL
compat_tokenize_tokenize = tokenize.tokenize compat_tokenize_tokenize = tokenize.tokenize
compat_urllib_error = urllib.error compat_urllib_error = urllib.error
compat_urllib_HTTPError = compat_HTTPError compat_urllib_HTTPError = urllib.error.HTTPError
compat_urllib_parse = urllib.parse compat_urllib_parse = urllib.parse
compat_urllib_parse_parse_qs = urllib.parse.parse_qs compat_urllib_parse_parse_qs = urllib.parse.parse_qs
compat_urllib_parse_quote = urllib.parse.quote compat_urllib_parse_quote = urllib.parse.quote
@@ -104,12 +104,5 @@ compat_xml_parse_error = compat_xml_etree_ElementTree_ParseError = etree.ParseEr
compat_xpath = lambda xpath: xpath compat_xpath = lambda xpath: xpath
compat_zip = zip compat_zip = zip
workaround_optparse_bug9161 = lambda: None workaround_optparse_bug9161 = lambda: None
compat_str = str
compat_b64decode = base64.b64decode
compat_urlparse = urllib.parse
compat_parse_qs = urllib.parse.parse_qs
compat_urllib_parse_unquote = urllib.parse.unquote
compat_urllib_parse_urlencode = urllib.parse.urlencode
compat_urllib_parse_urlparse = urllib.parse.urlparse
legacy = [] legacy = []

View File

@@ -15,7 +15,7 @@ def get_package_info(module):
name=getattr(module, '_yt_dlp__identifier', module.__name__), name=getattr(module, '_yt_dlp__identifier', module.__name__),
version=str(next(filter(None, ( version=str(next(filter(None, (
getattr(module, attr, None) getattr(module, attr, None)
for attr in ('_yt_dlp__version', '__version__', 'version_string', 'version') for attr in ('__version__', 'version_string', 'version')
)), None))) )), None)))
@@ -57,7 +57,7 @@ def passthrough_module(parent, child, allowed_attributes=(..., ), *, callback=la
callback(attr) callback(attr)
return ret return ret
@functools.cache @functools.lru_cache(maxsize=None)
def from_child(attr): def from_child(attr):
nonlocal child nonlocal child
if attr not in allowed_attributes: if attr not in allowed_attributes:
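`functools.cache` only exists on Python 3.9+, and there it is defined as `lru_cache(maxsize=None)`, so the replacement is behaviourally identical:

```
import functools

@functools.lru_cache(maxsize=None)  # what functools.cache expands to on 3.9+
def resolve(attr):
    print(f'resolving {attr} once')
    return attr

resolve('casefold')
resolve('casefold')  # served from the cache; nothing printed
```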

View File

@@ -0,0 +1,26 @@
# flake8: noqa: F405
from functools import * # noqa: F403
from .compat_utils import passthrough_module
passthrough_module(__name__, 'functools')
del passthrough_module
try:
cache # >= 3.9
except NameError:
cache = lru_cache(maxsize=None)
try:
cached_property # >= 3.8
except NameError:
class cached_property:
def __init__(self, func):
update_wrapper(self, func)
self.func = func
def __get__(self, instance, _):
if instance is None:
return self
setattr(instance, self.func.__name__, self.func(instance))
return getattr(instance, self.func.__name__)
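The backported `cached_property` works because it is a non-data descriptor (no `__set__`): the first access stores the computed value as an instance attribute of the same name, which shadows the descriptor from then on. Usage matches the 3.8+ built-in:

```
class Downloads:
    @cached_property
    def index(self):
        print('built once')
        return {}

d = Downloads()
d.index  # prints 'built once', then stores the dict on the instance
d.index  # plain attribute lookup now; the descriptor is bypassed
```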

View File

@@ -1,22 +1,16 @@
tests = {
'webp': lambda h: h[0:4] == b'RIFF' and h[8:] == b'WEBP',
'png': lambda h: h[:8] == b'\211PNG\r\n\032\n',
'jpeg': lambda h: h[6:10] in (b'JFIF', b'Exif'),
'gif': lambda h: h[:6] in (b'GIF87a', b'GIF89a'),
}
def what(file=None, h=None): def what(file=None, h=None):
"""Detect format of image (Currently supports jpeg, png, webp, gif only) """Detect format of image (Currently supports jpeg, png, webp, gif only)
Ref: https://github.com/python/cpython/blob/3.11/Lib/imghdr.py Ref: https://github.com/python/cpython/blob/3.10/Lib/imghdr.py
Ref: https://www.w3.org/Graphics/JPEG/itu-t81.pdf
""" """
if h is None: if h is None:
with open(file, 'rb') as f: with open(file, 'rb') as f:
h = f.read(12) h = f.read(12)
return next((type_ for type_, test in tests.items() if test(h)), None)
if h.startswith(b'RIFF') and h.startswith(b'WEBP', 8):
return 'webp'
if h.startswith(b'\x89PNG'):
return 'png'
if h.startswith(b'\xFF\xD8\xFF'):
return 'jpeg'
if h.startswith(b'GIF'):
return 'gif'
return None
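The rewrite replaces the ordered `if` chain with a dispatch table over the first 12 bytes; note that the webp test relies on exactly 12 bytes being read, since it compares `h[8:]` whole. Exercising it (the import path is assumed from this hunk's context):

```
from yt_dlp.compat.imghdr import what  # assumed shim location

print(what(h=b'\x89PNG\r\n\x1a\n' + b'\x00\x00\x00\r'))  # 'png'
print(what(h=b'RIFF\x00\x00\x00\x00WEBP'))               # 'webp'
```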

View File

@@ -1,7 +1,7 @@
# flake8: noqa: F405 # flake8: noqa: F405
from urllib import * # noqa: F403 from urllib import * # noqa: F403
del request # noqa: F821 del request
from . import request # noqa: F401 from . import request # noqa: F401
from ..compat_utils import passthrough_module from ..compat_utils import passthrough_module

View File

@@ -7,13 +7,13 @@ passthrough_module(__name__, 'urllib.request')
del passthrough_module del passthrough_module
import os from .. import compat_os_name
if os.name == 'nt': if compat_os_name == 'nt':
# On older Python versions, proxies are extracted from Windows registry erroneously. [1] # On older python versions, proxies are extracted from Windows registry erroneously. [1]
# If the https proxy in the registry does not have a scheme, urllib will incorrectly add https:// to it. [2] # If the https proxy in the registry does not have a scheme, urllib will incorrectly add https:// to it. [2]
# It is unlikely that the user has actually set it to be https, so we should be fine to safely downgrade # It is unlikely that the user has actually set it to be https, so we should be fine to safely downgrade
# it to http on these older Python versions to avoid issues # it to http on these older python versions to avoid issues
# This also applies for ftp proxy type, as ftp:// proxy scheme is not supported. # This also applies for ftp proxy type, as ftp:// proxy scheme is not supported.
# 1: https://github.com/python/cpython/issues/86793 # 1: https://github.com/python/cpython/issues/86793
# 2: https://github.com/python/cpython/blob/51f1ae5ceb0673316c4e4b0175384e892e33cc6e/Lib/urllib/request.py#L2683-L2698 # 2: https://github.com/python/cpython/blob/51f1ae5ceb0673316c4e4b0175384e892e33cc6e/Lib/urllib/request.py#L2683-L2698
@@ -37,4 +37,4 @@ if os.name == 'nt':
def getproxies(): def getproxies():
return getproxies_environment() or getproxies_registry_patched() return getproxies_environment() or getproxies_registry_patched()
del os del compat_os_name

View File

@@ -1,10 +1,6 @@
import base64 import base64
import collections import collections
import contextlib import contextlib
import datetime as dt
import functools
import glob
import hashlib
import http.cookiejar import http.cookiejar
import http.cookies import http.cookies
import io import io
@@ -18,13 +14,16 @@ import sys
import tempfile import tempfile
import time import time
import urllib.request import urllib.request
from datetime import datetime, timedelta, timezone
from enum import Enum, auto from enum import Enum, auto
from hashlib import pbkdf2_hmac
from .aes import ( from .aes import (
aes_cbc_decrypt_bytes, aes_cbc_decrypt_bytes,
aes_gcm_decrypt_and_verify_bytes, aes_gcm_decrypt_and_verify_bytes,
unpad_pkcs7, unpad_pkcs7,
) )
from .compat import functools
from .dependencies import ( from .dependencies import (
_SECRETSTORAGE_UNAVAILABLE_REASON, _SECRETSTORAGE_UNAVAILABLE_REASON,
secretstorage, secretstorage,
@@ -32,8 +31,6 @@ from .dependencies import (
) )
from .minicurses import MultilinePrinter, QuietMultilinePrinter from .minicurses import MultilinePrinter, QuietMultilinePrinter
from .utils import ( from .utils import (
DownloadError,
YoutubeDLError,
Popen, Popen,
error_to_str, error_to_str,
expand_path, expand_path,
@@ -46,7 +43,7 @@ from .utils import (
from .utils._utils import _YDLLogger from .utils._utils import _YDLLogger
from .utils.networking import normalize_url from .utils.networking import normalize_url
CHROMIUM_BASED_BROWSERS = {'brave', 'chrome', 'chromium', 'edge', 'opera', 'vivaldi', 'whale'} CHROMIUM_BASED_BROWSERS = {'brave', 'chrome', 'chromium', 'edge', 'opera', 'vivaldi'}
SUPPORTED_BROWSERS = CHROMIUM_BASED_BROWSERS | {'firefox', 'safari'} SUPPORTED_BROWSERS = CHROMIUM_BASED_BROWSERS | {'firefox', 'safari'}
@@ -86,31 +83,24 @@ def _create_progress_bar(logger):
return printer return printer
class CookieLoadError(YoutubeDLError):
pass
def load_cookies(cookie_file, browser_specification, ydl): def load_cookies(cookie_file, browser_specification, ydl):
try: cookie_jars = []
cookie_jars = [] if browser_specification is not None:
if browser_specification is not None: browser_name, profile, keyring, container = _parse_browser_specification(*browser_specification)
browser_name, profile, keyring, container = _parse_browser_specification(*browser_specification) cookie_jars.append(
cookie_jars.append( extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl), keyring=keyring, container=container))
extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl), keyring=keyring, container=container))
if cookie_file is not None: if cookie_file is not None:
is_filename = is_path_like(cookie_file) is_filename = is_path_like(cookie_file)
if is_filename: if is_filename:
cookie_file = expand_path(cookie_file) cookie_file = expand_path(cookie_file)
jar = YoutubeDLCookieJar(cookie_file) jar = YoutubeDLCookieJar(cookie_file)
if not is_filename or os.access(cookie_file, os.R_OK): if not is_filename or os.access(cookie_file, os.R_OK):
jar.load() jar.load()
cookie_jars.append(jar) cookie_jars.append(jar)
return _merge_cookie_jars(cookie_jars) return _merge_cookie_jars(cookie_jars)
except Exception:
raise CookieLoadError('failed to load cookies')
def extract_cookies_from_browser(browser_name, profile=None, logger=YDLLogger(), *, keyring=None, container=None): def extract_cookies_from_browser(browser_name, profile=None, logger=YDLLogger(), *, keyring=None, container=None):
@@ -128,18 +118,17 @@ def _extract_firefox_cookies(profile, container, logger):
logger.info('Extracting cookies from firefox') logger.info('Extracting cookies from firefox')
if not sqlite3: if not sqlite3:
logger.warning('Cannot extract cookies from firefox without sqlite3 support. ' logger.warning('Cannot extract cookies from firefox without sqlite3 support. '
'Please use a Python interpreter compiled with sqlite3 support') 'Please use a python interpreter compiled with sqlite3 support')
return YoutubeDLCookieJar() return YoutubeDLCookieJar()
if profile is None: if profile is None:
search_roots = list(_firefox_browser_dirs()) search_root = _firefox_browser_dir()
elif _is_path(profile): elif _is_path(profile):
search_roots = [profile] search_root = profile
else: else:
search_roots = [os.path.join(path, profile) for path in _firefox_browser_dirs()] search_root = os.path.join(_firefox_browser_dir(), profile)
search_root = ', '.join(map(repr, search_roots))
cookie_database_path = _newest(_firefox_cookie_dbs(search_roots)) cookie_database_path = _find_most_recently_used_file(search_root, 'cookies.sqlite', logger)
if cookie_database_path is None: if cookie_database_path is None:
raise FileNotFoundError(f'could not find firefox cookies database in {search_root}') raise FileNotFoundError(f'could not find firefox cookies database in {search_root}')
logger.debug(f'Extracting cookies from: "{cookie_database_path}"') logger.debug(f'Extracting cookies from: "{cookie_database_path}"')
@@ -153,7 +142,7 @@ def _extract_firefox_cookies(profile, container, logger):
identities = json.load(containers).get('identities', []) identities = json.load(containers).get('identities', [])
container_id = next((context.get('userContextId') for context in identities if container in ( container_id = next((context.get('userContextId') for context in identities if container in (
context.get('name'), context.get('name'),
try_call(lambda: re.fullmatch(r'userContext([^\.]+)\.label', context['l10nID']).group()), try_call(lambda: re.fullmatch(r'userContext([^\.]+)\.label', context['l10nID']).group())
)), None) )), None)
if not isinstance(container_id, int): if not isinstance(container_id, int):
raise ValueError(f'could not find firefox container "{container}" in containers.json') raise ValueError(f'could not find firefox container "{container}" in containers.json')
@@ -193,28 +182,12 @@ def _extract_firefox_cookies(profile, container, logger):
cursor.connection.close() cursor.connection.close()
def _firefox_browser_dirs(): def _firefox_browser_dir():
if sys.platform in ('cygwin', 'win32'): if sys.platform in ('cygwin', 'win32'):
yield from map(os.path.expandvars, ( return os.path.expandvars(R'%APPDATA%\Mozilla\Firefox\Profiles')
R'%APPDATA%\Mozilla\Firefox\Profiles',
R'%LOCALAPPDATA%\Packages\Mozilla.Firefox_n80bbvh6b1yt2\LocalCache\Roaming\Mozilla\Firefox\Profiles',
))
elif sys.platform == 'darwin': elif sys.platform == 'darwin':
yield os.path.expanduser('~/Library/Application Support/Firefox/Profiles') return os.path.expanduser('~/Library/Application Support/Firefox')
return os.path.expanduser('~/.mozilla/firefox')
else:
yield from map(os.path.expanduser, (
'~/.mozilla/firefox',
'~/snap/firefox/common/.mozilla/firefox',
'~/.var/app/org.mozilla.firefox/.mozilla/firefox',
))
def _firefox_cookie_dbs(roots):
for root in map(os.path.abspath, roots):
for pattern in ('', '*/', 'Profiles/*/'):
yield from glob.iglob(os.path.join(root, pattern, 'cookies.sqlite'))
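The new search walks several roots and glob patterns and then keeps the most recently modified hit via `_newest`; presumably something like:

```
import glob
import os

def newest(files):
    # presumed shape of the `_newest` helper: latest-modified path, or None
    return max(files, key=os.path.getmtime, default=None)

candidates = glob.iglob(os.path.expanduser('~/.mozilla/firefox/*/cookies.sqlite'))
print(newest(candidates))
```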
def _get_chromium_based_browser_settings(browser_name): def _get_chromium_based_browser_settings(browser_name):
@@ -229,7 +202,6 @@ def _get_chromium_based_browser_settings(browser_name):
'edge': os.path.join(appdata_local, R'Microsoft\Edge\User Data'), 'edge': os.path.join(appdata_local, R'Microsoft\Edge\User Data'),
'opera': os.path.join(appdata_roaming, R'Opera Software\Opera Stable'), 'opera': os.path.join(appdata_roaming, R'Opera Software\Opera Stable'),
'vivaldi': os.path.join(appdata_local, R'Vivaldi\User Data'), 'vivaldi': os.path.join(appdata_local, R'Vivaldi\User Data'),
'whale': os.path.join(appdata_local, R'Naver\Naver Whale\User Data'),
}[browser_name] }[browser_name]
elif sys.platform == 'darwin': elif sys.platform == 'darwin':
@@ -241,7 +213,6 @@ def _get_chromium_based_browser_settings(browser_name):
'edge': os.path.join(appdata, 'Microsoft Edge'), 'edge': os.path.join(appdata, 'Microsoft Edge'),
'opera': os.path.join(appdata, 'com.operasoftware.Opera'), 'opera': os.path.join(appdata, 'com.operasoftware.Opera'),
'vivaldi': os.path.join(appdata, 'Vivaldi'), 'vivaldi': os.path.join(appdata, 'Vivaldi'),
'whale': os.path.join(appdata, 'Naver/Whale'),
}[browser_name] }[browser_name]
else: else:
@@ -253,7 +224,6 @@ def _get_chromium_based_browser_settings(browser_name):
'edge': os.path.join(config, 'microsoft-edge'), 'edge': os.path.join(config, 'microsoft-edge'),
'opera': os.path.join(config, 'opera'), 'opera': os.path.join(config, 'opera'),
'vivaldi': os.path.join(config, 'vivaldi'), 'vivaldi': os.path.join(config, 'vivaldi'),
'whale': os.path.join(config, 'naver-whale'),
}[browser_name] }[browser_name]
# Linux keyring names can be determined by snooping on dbus while opening the browser in KDE: # Linux keyring names can be determined by snooping on dbus while opening the browser in KDE:
@@ -265,7 +235,6 @@ def _get_chromium_based_browser_settings(browser_name):
'edge': 'Microsoft Edge' if sys.platform == 'darwin' else 'Chromium', 'edge': 'Microsoft Edge' if sys.platform == 'darwin' else 'Chromium',
'opera': 'Opera' if sys.platform == 'darwin' else 'Chromium', 'opera': 'Opera' if sys.platform == 'darwin' else 'Chromium',
'vivaldi': 'Vivaldi' if sys.platform == 'darwin' else 'Chrome', 'vivaldi': 'Vivaldi' if sys.platform == 'darwin' else 'Chrome',
'whale': 'Whale',
}[browser_name] }[browser_name]
browsers_without_profiles = {'opera'} browsers_without_profiles = {'opera'}
@@ -273,7 +242,7 @@ def _get_chromium_based_browser_settings(browser_name):
return { return {
'browser_dir': browser_dir, 'browser_dir': browser_dir,
'keyring_name': keyring_name, 'keyring_name': keyring_name,
'supports_profiles': browser_name not in browsers_without_profiles, 'supports_profiles': browser_name not in browsers_without_profiles
} }
@@ -282,7 +251,7 @@ def _extract_chrome_cookies(browser_name, profile, keyring, logger):
if not sqlite3: if not sqlite3:
logger.warning(f'Cannot extract cookies from {browser_name} without sqlite3 support. ' logger.warning(f'Cannot extract cookies from {browser_name} without sqlite3 support. '
'Please use a Python interpreter compiled with sqlite3 support') 'Please use a python interpreter compiled with sqlite3 support')
return YoutubeDLCookieJar() return YoutubeDLCookieJar()
config = _get_chromium_based_browser_settings(browser_name) config = _get_chromium_based_browser_settings(browser_name)
@@ -299,23 +268,17 @@ def _extract_chrome_cookies(browser_name, profile, keyring, logger):
logger.error(f'{browser_name} does not support profiles') logger.error(f'{browser_name} does not support profiles')
search_root = config['browser_dir'] search_root = config['browser_dir']
cookie_database_path = _newest(_find_files(search_root, 'Cookies', logger)) cookie_database_path = _find_most_recently_used_file(search_root, 'Cookies', logger)
if cookie_database_path is None: if cookie_database_path is None:
raise FileNotFoundError(f'could not find {browser_name} cookies database in "{search_root}"') raise FileNotFoundError(f'could not find {browser_name} cookies database in "{search_root}"')
logger.debug(f'Extracting cookies from: "{cookie_database_path}"') logger.debug(f'Extracting cookies from: "{cookie_database_path}"')
decryptor = get_cookie_decryptor(config['browser_dir'], config['keyring_name'], logger, keyring=keyring)
with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir: with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
cursor = None cursor = None
try: try:
cursor = _open_database_copy(cookie_database_path, tmpdir) cursor = _open_database_copy(cookie_database_path, tmpdir)
# meta_version is necessary to determine if we need to trim the hash prefix from the cookies
# Ref: https://chromium.googlesource.com/chromium/src/+/b02dcebd7cafab92770734dc2bc317bd07f1d891/net/extras/sqlite/sqlite_persistent_cookie_store.cc#223
meta_version = int(cursor.execute('SELECT value FROM meta WHERE key = "version"').fetchone()[0])
decryptor = get_cookie_decryptor(
config['browser_dir'], config['keyring_name'], logger,
keyring=keyring, meta_version=meta_version)
cursor.connection.text_factory = bytes cursor.connection.text_factory = bytes
column_names = _get_column_names(cursor, 'cookies') column_names = _get_column_names(cursor, 'cookies')
secure_column = 'is_secure' if 'is_secure' in column_names else 'secure' secure_column = 'is_secure' if 'is_secure' in column_names else 'secure'
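The new code reads the `meta` table's schema version before choosing a decryptor. A sketch of the copy-then-query pattern it sits on, with `_open_database_copy` reconstructed here from its usage (the running browser may hold the live file locked):

```
import os
import shutil
import sqlite3
import tempfile

def open_database_copy(database_path, tmpdir):
    # query a private copy; the browser may lock the original
    copy_path = os.path.join(tmpdir, 'Cookies')
    shutil.copy(database_path, copy_path)
    return sqlite3.connect(copy_path).cursor()

with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
    cursor = open_database_copy('/path/to/User Data/Default/Cookies', tmpdir)
    meta_version = int(cursor.execute(
        'SELECT value FROM meta WHERE key = "version"').fetchone()[0])
    cursor.connection.close()
```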
@@ -344,12 +307,6 @@ def _extract_chrome_cookies(browser_name, profile, keyring, logger):
counts['unencrypted'] = unencrypted_cookies counts['unencrypted'] = unencrypted_cookies
logger.debug(f'cookie version breakdown: {counts}') logger.debug(f'cookie version breakdown: {counts}')
return jar return jar
except PermissionError as error:
if os.name == 'nt' and error.errno == 13:
message = 'Could not copy Chrome cookie database. See https://github.com/yt-dlp/yt-dlp/issues/7271 for more info'
logger.error(message)
raise DownloadError(message) # force exit
raise
finally: finally:
if cursor is not None: if cursor is not None:
cursor.connection.close() cursor.connection.close()
@@ -367,11 +324,6 @@ def _process_chrome_cookie(decryptor, host_key, name, value, encrypted_value, pa
if value is None: if value is None:
return is_encrypted, None return is_encrypted, None
# In chrome, session cookies have expires_utc set to 0
# In our cookie-store, cookies that do not expire should have expires set to None
if not expires_utc:
expires_utc = None
return is_encrypted, http.cookiejar.Cookie( return is_encrypted, http.cookiejar.Cookie(
version=0, name=name, value=value, port=None, port_specified=False, version=0, name=name, value=value, port=None, port_specified=False,
domain=host_key, domain_specified=bool(host_key), domain_initial_dot=host_key.startswith('.'), domain=host_key, domain_specified=bool(host_key), domain_initial_dot=host_key.startswith('.'),
@@ -413,23 +365,22 @@ class ChromeCookieDecryptor:
raise NotImplementedError('Must be implemented by sub classes') raise NotImplementedError('Must be implemented by sub classes')
def get_cookie_decryptor(browser_root, browser_keyring_name, logger, *, keyring=None, meta_version=None): def get_cookie_decryptor(browser_root, browser_keyring_name, logger, *, keyring=None):
if sys.platform == 'darwin': if sys.platform == 'darwin':
return MacChromeCookieDecryptor(browser_keyring_name, logger, meta_version=meta_version) return MacChromeCookieDecryptor(browser_keyring_name, logger)
elif sys.platform in ('win32', 'cygwin'): elif sys.platform in ('win32', 'cygwin'):
return WindowsChromeCookieDecryptor(browser_root, logger, meta_version=meta_version) return WindowsChromeCookieDecryptor(browser_root, logger)
return LinuxChromeCookieDecryptor(browser_keyring_name, logger, keyring=keyring, meta_version=meta_version) return LinuxChromeCookieDecryptor(browser_keyring_name, logger, keyring=keyring)
class LinuxChromeCookieDecryptor(ChromeCookieDecryptor): class LinuxChromeCookieDecryptor(ChromeCookieDecryptor):
def __init__(self, browser_keyring_name, logger, *, keyring=None, meta_version=None): def __init__(self, browser_keyring_name, logger, *, keyring=None):
self._logger = logger self._logger = logger
self._v10_key = self.derive_key(b'peanuts') self._v10_key = self.derive_key(b'peanuts')
self._empty_key = self.derive_key(b'') self._empty_key = self.derive_key(b'')
self._cookie_counts = {'v10': 0, 'v11': 0, 'other': 0} self._cookie_counts = {'v10': 0, 'v11': 0, 'other': 0}
self._browser_keyring_name = browser_keyring_name self._browser_keyring_name = browser_keyring_name
self._keyring = keyring self._keyring = keyring
self._meta_version = meta_version or 0
@functools.cached_property @functools.cached_property
def _v11_key(self): def _v11_key(self):
@@ -458,18 +409,14 @@ class LinuxChromeCookieDecryptor(ChromeCookieDecryptor):
if version == b'v10': if version == b'v10':
self._cookie_counts['v10'] += 1 self._cookie_counts['v10'] += 1
return _decrypt_aes_cbc_multi( return _decrypt_aes_cbc_multi(ciphertext, (self._v10_key, self._empty_key), self._logger)
ciphertext, (self._v10_key, self._empty_key), self._logger,
hash_prefix=self._meta_version >= 24)
elif version == b'v11': elif version == b'v11':
self._cookie_counts['v11'] += 1 self._cookie_counts['v11'] += 1
if self._v11_key is None: if self._v11_key is None:
self._logger.warning('cannot decrypt v11 cookies: no key found', only_once=True) self._logger.warning('cannot decrypt v11 cookies: no key found', only_once=True)
return None return None
return _decrypt_aes_cbc_multi( return _decrypt_aes_cbc_multi(ciphertext, (self._v11_key, self._empty_key), self._logger)
ciphertext, (self._v11_key, self._empty_key), self._logger,
hash_prefix=self._meta_version >= 24)
else: else:
self._logger.warning(f'unknown cookie version: "{version}"', only_once=True) self._logger.warning(f'unknown cookie version: "{version}"', only_once=True)
@@ -478,12 +425,11 @@ class LinuxChromeCookieDecryptor(ChromeCookieDecryptor):
class MacChromeCookieDecryptor(ChromeCookieDecryptor): class MacChromeCookieDecryptor(ChromeCookieDecryptor):
def __init__(self, browser_keyring_name, logger, meta_version=None): def __init__(self, browser_keyring_name, logger):
self._logger = logger self._logger = logger
password = _get_mac_keyring_password(browser_keyring_name, logger) password = _get_mac_keyring_password(browser_keyring_name, logger)
self._v10_key = None if password is None else self.derive_key(password) self._v10_key = None if password is None else self.derive_key(password)
self._cookie_counts = {'v10': 0, 'other': 0} self._cookie_counts = {'v10': 0, 'other': 0}
self._meta_version = meta_version or 0
@staticmethod @staticmethod
def derive_key(password): def derive_key(password):
@@ -501,8 +447,7 @@ class MacChromeCookieDecryptor(ChromeCookieDecryptor):
self._logger.warning('cannot decrypt v10 cookies: no key found', only_once=True) self._logger.warning('cannot decrypt v10 cookies: no key found', only_once=True)
return None return None
return _decrypt_aes_cbc_multi( return _decrypt_aes_cbc_multi(ciphertext, (self._v10_key,), self._logger)
ciphertext, (self._v10_key,), self._logger, hash_prefix=self._meta_version >= 24)
else: else:
self._cookie_counts['other'] += 1 self._cookie_counts['other'] += 1
@@ -512,11 +457,10 @@ class MacChromeCookieDecryptor(ChromeCookieDecryptor):
class WindowsChromeCookieDecryptor(ChromeCookieDecryptor): class WindowsChromeCookieDecryptor(ChromeCookieDecryptor):
def __init__(self, browser_root, logger, meta_version=None): def __init__(self, browser_root, logger):
self._logger = logger self._logger = logger
self._v10_key = _get_windows_v10_key(browser_root, logger) self._v10_key = _get_windows_v10_key(browser_root, logger)
self._cookie_counts = {'v10': 0, 'other': 0} self._cookie_counts = {'v10': 0, 'other': 0}
self._meta_version = meta_version or 0
def decrypt(self, encrypted_value): def decrypt(self, encrypted_value):
version = encrypted_value[:3] version = encrypted_value[:3]
@@ -540,9 +484,7 @@ class WindowsChromeCookieDecryptor(ChromeCookieDecryptor):
ciphertext = raw_ciphertext[nonce_length:-authentication_tag_length] ciphertext = raw_ciphertext[nonce_length:-authentication_tag_length]
authentication_tag = raw_ciphertext[-authentication_tag_length:] authentication_tag = raw_ciphertext[-authentication_tag_length:]
return _decrypt_aes_gcm( return _decrypt_aes_gcm(ciphertext, self._v10_key, nonce, authentication_tag, self._logger)
ciphertext, self._v10_key, nonce, authentication_tag, self._logger,
hash_prefix=self._meta_version >= 24)
else: else:
self._cookie_counts['other'] += 1 self._cookie_counts['other'] += 1
@@ -633,7 +575,7 @@ class DataParser:
def _mac_absolute_time_to_posix(timestamp): def _mac_absolute_time_to_posix(timestamp):
return int((dt.datetime(2001, 1, 1, 0, 0, tzinfo=dt.timezone.utc) + dt.timedelta(seconds=timestamp)).timestamp()) return int((datetime(2001, 1, 1, 0, 0, tzinfo=timezone.utc) + timedelta(seconds=timestamp)).timestamp())
def _parse_safari_cookies_header(data, logger): def _parse_safari_cookies_header(data, logger):
@@ -766,38 +708,40 @@ def _get_linux_desktop_environment(env, logger):
xdg_current_desktop = env.get('XDG_CURRENT_DESKTOP', None) xdg_current_desktop = env.get('XDG_CURRENT_DESKTOP', None)
desktop_session = env.get('DESKTOP_SESSION', None) desktop_session = env.get('DESKTOP_SESSION', None)
if xdg_current_desktop is not None: if xdg_current_desktop is not None:
for part in map(str.strip, xdg_current_desktop.split(':')): xdg_current_desktop = xdg_current_desktop.split(':')[0].strip()
if part == 'Unity':
if desktop_session is not None and 'gnome-fallback' in desktop_session: if xdg_current_desktop == 'Unity':
return _LinuxDesktopEnvironment.GNOME if desktop_session is not None and 'gnome-fallback' in desktop_session:
else:
return _LinuxDesktopEnvironment.UNITY
elif part == 'Deepin':
return _LinuxDesktopEnvironment.DEEPIN
elif part == 'GNOME':
return _LinuxDesktopEnvironment.GNOME return _LinuxDesktopEnvironment.GNOME
elif part == 'X-Cinnamon': else:
return _LinuxDesktopEnvironment.CINNAMON return _LinuxDesktopEnvironment.UNITY
elif part == 'KDE': elif xdg_current_desktop == 'Deepin':
kde_version = env.get('KDE_SESSION_VERSION', None) return _LinuxDesktopEnvironment.DEEPIN
if kde_version == '5': elif xdg_current_desktop == 'GNOME':
return _LinuxDesktopEnvironment.KDE5 return _LinuxDesktopEnvironment.GNOME
elif kde_version == '6': elif xdg_current_desktop == 'X-Cinnamon':
return _LinuxDesktopEnvironment.KDE6 return _LinuxDesktopEnvironment.CINNAMON
elif kde_version == '4': elif xdg_current_desktop == 'KDE':
return _LinuxDesktopEnvironment.KDE4 kde_version = env.get('KDE_SESSION_VERSION', None)
else: if kde_version == '5':
logger.info(f'unknown KDE version: "{kde_version}". Assuming KDE4') return _LinuxDesktopEnvironment.KDE5
return _LinuxDesktopEnvironment.KDE4 elif kde_version == '6':
elif part == 'Pantheon': return _LinuxDesktopEnvironment.KDE6
return _LinuxDesktopEnvironment.PANTHEON elif kde_version == '4':
elif part == 'XFCE': return _LinuxDesktopEnvironment.KDE4
return _LinuxDesktopEnvironment.XFCE else:
elif part == 'UKUI': logger.info(f'unknown KDE version: "{kde_version}". Assuming KDE4')
return _LinuxDesktopEnvironment.UKUI return _LinuxDesktopEnvironment.KDE4
elif part == 'LXQt': elif xdg_current_desktop == 'Pantheon':
return _LinuxDesktopEnvironment.LXQT return _LinuxDesktopEnvironment.PANTHEON
logger.info(f'XDG_CURRENT_DESKTOP is set to an unknown value: "{xdg_current_desktop}"') elif xdg_current_desktop == 'XFCE':
return _LinuxDesktopEnvironment.XFCE
elif xdg_current_desktop == 'UKUI':
return _LinuxDesktopEnvironment.UKUI
elif xdg_current_desktop == 'LXQt':
return _LinuxDesktopEnvironment.LXQT
else:
logger.info(f'XDG_CURRENT_DESKTOP is set to an unknown value: "{xdg_current_desktop}"')
elif desktop_session is not None: elif desktop_session is not None:
if desktop_session == 'deepin': if desktop_session == 'deepin':
@@ -850,7 +794,7 @@ def _choose_linux_keyring(logger):
elif desktop_environment == _LinuxDesktopEnvironment.KDE6: elif desktop_environment == _LinuxDesktopEnvironment.KDE6:
linux_keyring = _LinuxKeyring.KWALLET6 linux_keyring = _LinuxKeyring.KWALLET6
elif desktop_environment in ( elif desktop_environment in (
_LinuxDesktopEnvironment.KDE3, _LinuxDesktopEnvironment.LXQT, _LinuxDesktopEnvironment.OTHER, _LinuxDesktopEnvironment.KDE3, _LinuxDesktopEnvironment.LXQT, _LinuxDesktopEnvironment.OTHER
): ):
linux_keyring = _LinuxKeyring.BASICTEXT linux_keyring = _LinuxKeyring.BASICTEXT
else: else:
@@ -885,7 +829,7 @@ def _get_kwallet_network_wallet(keyring, logger):
'dbus-send', '--session', '--print-reply=literal', 'dbus-send', '--session', '--print-reply=literal',
f'--dest={service_name}', f'--dest={service_name}',
wallet_path, wallet_path,
'org.kde.KWallet.networkWallet', 'org.kde.KWallet.networkWallet'
], text=True, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL) ], text=True, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
if returncode: if returncode:
@@ -915,7 +859,7 @@ def _get_kwallet_password(browser_keyring_name, keyring, logger):
'kwallet-query', 'kwallet-query',
'--read-password', f'{browser_keyring_name} Safe Storage', '--read-password', f'{browser_keyring_name} Safe Storage',
'--folder', f'{browser_keyring_name} Keys', '--folder', f'{browser_keyring_name} Keys',
network_wallet, network_wallet
], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL) ], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
if returncode: if returncode:
@@ -955,8 +899,9 @@ def _get_gnome_keyring_password(browser_keyring_name, logger):
for item in col.get_all_items(): for item in col.get_all_items():
if item.get_label() == f'{browser_keyring_name} Safe Storage': if item.get_label() == f'{browser_keyring_name} Safe Storage':
return item.get_secret() return item.get_secret()
logger.error('failed to read from keyring') else:
return b'' logger.error('failed to read from keyring')
return b''
def _get_linux_keyring_password(browser_keyring_name, keyring, logger): def _get_linux_keyring_password(browser_keyring_name, keyring, logger):
@@ -1002,7 +947,7 @@ def _get_windows_v10_key(browser_root, logger):
References: References:
- [1] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/os_crypt_win.cc - [1] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/sync/os_crypt_win.cc
""" """
path = _newest(_find_files(browser_root, 'Local State', logger)) path = _find_most_recently_used_file(browser_root, 'Local State', logger)
if path is None: if path is None:
logger.error('could not find local state file') logger.error('could not find local state file')
return None return None
@@ -1025,15 +970,13 @@ def _get_windows_v10_key(browser_root, logger):
def pbkdf2_sha1(password, salt, iterations, key_length): def pbkdf2_sha1(password, salt, iterations, key_length):
return hashlib.pbkdf2_hmac('sha1', password, salt, iterations, key_length) return pbkdf2_hmac('sha1', password, salt, iterations, key_length)
def _decrypt_aes_cbc_multi(ciphertext, keys, logger, initialization_vector=b' ' * 16, hash_prefix=False): def _decrypt_aes_cbc_multi(ciphertext, keys, logger, initialization_vector=b' ' * 16):
for key in keys: for key in keys:
plaintext = unpad_pkcs7(aes_cbc_decrypt_bytes(ciphertext, key, initialization_vector)) plaintext = unpad_pkcs7(aes_cbc_decrypt_bytes(ciphertext, key, initialization_vector))
try: try:
if hash_prefix:
return plaintext[32:].decode()
return plaintext.decode() return plaintext.decode()
except UnicodeDecodeError: except UnicodeDecodeError:
pass pass
@@ -1041,7 +984,7 @@ def _decrypt_aes_cbc_multi(ciphertext, keys, logger, initialization_vector=b' '
return None return None
def _decrypt_aes_gcm(ciphertext, key, nonce, authentication_tag, logger, hash_prefix=False): def _decrypt_aes_gcm(ciphertext, key, nonce, authentication_tag, logger):
try: try:
plaintext = aes_gcm_decrypt_and_verify_bytes(ciphertext, key, authentication_tag, nonce) plaintext = aes_gcm_decrypt_and_verify_bytes(ciphertext, key, authentication_tag, nonce)
except ValueError: except ValueError:
@@ -1049,8 +992,6 @@ def _decrypt_aes_gcm(ciphertext, key, nonce, authentication_tag, logger, hash_pr
return None return None
try: try:
if hash_prefix:
return plaintext[32:].decode()
return plaintext.decode() return plaintext.decode()
except UnicodeDecodeError: except UnicodeDecodeError:
logger.warning('failed to decrypt cookie (AES-GCM) because UTF-8 decoding failed. Possibly the key is wrong?', only_once=True) logger.warning('failed to decrypt cookie (AES-GCM) because UTF-8 decoding failed. Possibly the key is wrong?', only_once=True)
@@ -1080,12 +1021,11 @@ def _decrypt_windows_dpapi(ciphertext, logger):
None, # pvReserved: must be NULL None, # pvReserved: must be NULL
None, # pPromptStruct: information about prompts to display None, # pPromptStruct: information about prompts to display
0, # dwFlags 0, # dwFlags
ctypes.byref(blob_out), # pDataOut ctypes.byref(blob_out) # pDataOut
) )
if not ret: if not ret:
message = 'Failed to decrypt with DPAPI. See https://github.com/yt-dlp/yt-dlp/issues/10927 for more info' logger.warning('failed to decrypt with DPAPI', only_once=True)
logger.error(message) return None
raise DownloadError(message) # force exit
result = ctypes.string_at(blob_out.pbData, blob_out.cbData) result = ctypes.string_at(blob_out.pbData, blob_out.cbData)
ctypes.windll.kernel32.LocalFree(blob_out.pbData) ctypes.windll.kernel32.LocalFree(blob_out.pbData)
@@ -1109,20 +1049,17 @@ def _get_column_names(cursor, table_name):
return [row[1].decode() for row in table_info] return [row[1].decode() for row in table_info]
def _newest(files): def _find_most_recently_used_file(root, filename, logger):
return max(files, key=lambda path: os.lstat(path).st_mtime, default=None)
def _find_files(root, filename, logger):
# if there are multiple browser profiles, take the most recently used one # if there are multiple browser profiles, take the most recently used one
i = 0 i, paths = 0, []
with _create_progress_bar(logger) as progress_bar: with _create_progress_bar(logger) as progress_bar:
for curr_root, _, files in os.walk(root): for curr_root, dirs, files in os.walk(root):
for file in files: for file in files:
i += 1 i += 1
progress_bar.print(f'Searching for "{filename}": {i: 6d} files searched') progress_bar.print(f'Searching for "{filename}": {i: 6d} files searched')
if file == filename: if file == filename:
yield os.path.join(curr_root, file) paths.append(os.path.join(curr_root, file))
return None if not paths else max(paths, key=lambda path: os.lstat(path).st_mtime)
def _merge_cookie_jars(jars): def _merge_cookie_jars(jars):
@@ -1136,7 +1073,7 @@ def _merge_cookie_jars(jars):
def _is_path(value): def _is_path(value):
return any(sep in value for sep in (os.path.sep, os.path.altsep) if sep) return os.path.sep in value
def _parse_browser_specification(browser_name, profile=None, keyring=None, container=None): def _parse_browser_specification(browser_name, profile=None, keyring=None, container=None):
@@ -1157,24 +1094,24 @@ class LenientSimpleCookie(http.cookies.SimpleCookie):
_LEGAL_VALUE_CHARS = _LEGAL_KEY_CHARS + re.escape('(),/<=>?@[]{}') _LEGAL_VALUE_CHARS = _LEGAL_KEY_CHARS + re.escape('(),/<=>?@[]{}')
_RESERVED = { _RESERVED = {
'expires', "expires",
'path', "path",
'comment', "comment",
'domain', "domain",
'max-age', "max-age",
'secure', "secure",
'httponly', "httponly",
'version', "version",
'samesite', "samesite",
} }
_FLAGS = {'secure', 'httponly'} _FLAGS = {"secure", "httponly"}
# Added 'bad' group to catch the remaining value # Added 'bad' group to catch the remaining value
_COOKIE_PATTERN = re.compile(r''' _COOKIE_PATTERN = re.compile(r"""
\s* # Optional whitespace at start of cookie \s* # Optional whitespace at start of cookie
(?P<key> # Start of group 'key' (?P<key> # Start of group 'key'
[''' + _LEGAL_KEY_CHARS + r''']+?# Any word of at least one letter [""" + _LEGAL_KEY_CHARS + r"""]+?# Any word of at least one letter
) # End of group 'key' ) # End of group 'key'
( # Optional group: there may not be a value. ( # Optional group: there may not be a value.
\s*=\s* # Equal Sign \s*=\s* # Equal Sign
@@ -1184,7 +1121,7 @@ class LenientSimpleCookie(http.cookies.SimpleCookie):
| # or | # or
\w{3},\s[\w\d\s-]{9,11}\s[\d:]{8}\sGMT # Special case for "expires" attr \w{3},\s[\w\d\s-]{9,11}\s[\d:]{8}\sGMT # Special case for "expires" attr
| # or | # or
[''' + _LEGAL_VALUE_CHARS + r''']* # Any word or empty string [""" + _LEGAL_VALUE_CHARS + r"""]* # Any word or empty string
) # End of group 'val' ) # End of group 'val'
| # or | # or
(?P<bad>(?:\\;|[^;])*?) # 'bad' group fallback for invalid values (?P<bad>(?:\\;|[^;])*?) # 'bad' group fallback for invalid values
@@ -1192,7 +1129,7 @@ class LenientSimpleCookie(http.cookies.SimpleCookie):
)? # End of optional value group )? # End of optional value group
\s* # Any number of spaces. \s* # Any number of spaces.
(\s+|;|$) # Ending either at space, semicolon, or EOS. (\s+|;|$) # Ending either at space, semicolon, or EOS.
''', re.ASCII | re.VERBOSE) """, re.ASCII | re.VERBOSE)
def load(self, data): def load(self, data):
# Workaround for https://github.com/yt-dlp/yt-dlp/issues/4776 # Workaround for https://github.com/yt-dlp/yt-dlp/issues/4776
@@ -1279,8 +1216,8 @@ class YoutubeDLCookieJar(http.cookiejar.MozillaCookieJar):
def _really_save(self, f, ignore_discard, ignore_expires): def _really_save(self, f, ignore_discard, ignore_expires):
now = time.time() now = time.time()
for cookie in self: for cookie in self:
if ((not ignore_discard and cookie.discard) if (not ignore_discard and cookie.discard
or (not ignore_expires and cookie.is_expired(now))): or not ignore_expires and cookie.is_expired(now)):
continue continue
name, value = cookie.name, cookie.value name, value = cookie.name, cookie.value
if value is None: if value is None:
@@ -1288,14 +1225,14 @@ class YoutubeDLCookieJar(http.cookiejar.MozillaCookieJar):
# with no name, whereas http.cookiejar regards it as a # with no name, whereas http.cookiejar regards it as a
# cookie with no value. # cookie with no value.
name, value = '', name name, value = '', name
f.write('{}\n'.format('\t'.join(( f.write('%s\n' % '\t'.join((
cookie.domain, cookie.domain,
self._true_or_false(cookie.domain.startswith('.')), self._true_or_false(cookie.domain.startswith('.')),
cookie.path, cookie.path,
self._true_or_false(cookie.secure), self._true_or_false(cookie.secure),
str_or_none(cookie.expires, default=''), str_or_none(cookie.expires, default=''),
name, value, name, value
)))) )))
def save(self, filename=None, ignore_discard=True, ignore_expires=True): def save(self, filename=None, ignore_discard=True, ignore_expires=True):
""" """
@@ -1334,10 +1271,10 @@ class YoutubeDLCookieJar(http.cookiejar.MozillaCookieJar):
return line return line
cookie_list = line.split('\t') cookie_list = line.split('\t')
if len(cookie_list) != self._ENTRY_LEN: if len(cookie_list) != self._ENTRY_LEN:
raise http.cookiejar.LoadError(f'invalid length {len(cookie_list)}') raise http.cookiejar.LoadError('invalid length %d' % len(cookie_list))
cookie = self._CookieFileEntry(*cookie_list) cookie = self._CookieFileEntry(*cookie_list)
if cookie.expires_at and not cookie.expires_at.isdigit(): if cookie.expires_at and not cookie.expires_at.isdigit():
raise http.cookiejar.LoadError(f'invalid expires at {cookie.expires_at}') raise http.cookiejar.LoadError('invalid expires at %s' % cookie.expires_at)
return line return line
cf = io.StringIO() cf = io.StringIO()
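
The cookie hunks above back out the `meta_version` plumbing: from Chromium database meta version 24 onwards, a decrypted cookie value carries a leading 32-byte SHA-256 digest of its `host_key`, which the newer code strips via `hash_prefix=self._meta_version >= 24`. A minimal sketch of that idea (standalone names, not the exact yt-dlp helpers; the digest comparison is an extra safety check assumed here, the real code simply slices the prefix off):

```
import hashlib

def strip_v24_hash_prefix(plaintext: bytes, host_key: bytes, meta_version: int) -> bytes:
    # Chromium >= meta version 24 prepends SHA-256(host_key) to the cookie value
    if meta_version < 24:
        return plaintext
    prefix, value = plaintext[:32], plaintext[32:]
    if prefix != hashlib.sha256(host_key).digest():
        # Likely a wrong key or a corrupt database copy
        raise ValueError('unexpected hash prefix on decrypted cookie')
    return value
```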

View File

@@ -24,7 +24,7 @@ try:
     from Crypto.Cipher import AES, PKCS1_OAEP, Blowfish, PKCS1_v1_5  # noqa: F401
     from Crypto.Hash import CMAC, SHA1  # noqa: F401
     from Crypto.PublicKey import RSA  # noqa: F401
-except (ImportError, OSError):
+except ImportError:
     __version__ = f'broken {__version__}'.strip()

View File

@@ -43,28 +43,19 @@ except Exception as _err:
 try:
     import sqlite3
-    # We need to get the underlying `sqlite` version, see https://github.com/yt-dlp/yt-dlp/issues/8152
-    sqlite3._yt_dlp__version = sqlite3.sqlite_version
 except ImportError:
-    # although sqlite3 is part of the standard library, it is possible to compile Python without
+    # although sqlite3 is part of the standard library, it is possible to compile python without
     # sqlite support. See: https://github.com/yt-dlp/yt-dlp/issues/544
     sqlite3 = None
 
 try:
     import websockets
-except ImportError:
+except (ImportError, SyntaxError):
+    # websockets 3.10 on python 3.6 causes SyntaxError
+    # See https://github.com/yt-dlp/yt-dlp/issues/2633
     websockets = None
 
-try:
-    import urllib3
-except ImportError:
-    urllib3 = None
-
-try:
-    import requests
-except ImportError:
-    requests = None
-
 try:
     import xattr  # xattr or pyxattr
@@ -74,10 +65,6 @@ else:
     if hasattr(xattr, 'set'):  # pyxattr
         xattr._yt_dlp__identifier = 'pyxattr'
 
-try:
-    import curl_cffi
-except ImportError:
-    curl_cffi = None
-
 from . import Cryptodome
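
Both sides of this file follow the same optional-dependency pattern: each import lives in its own try/except, the name is bound to None on failure, and callers feature-test with a plain truthiness check instead of a hard import. A minimal sketch of the pattern (the module name is illustrative):

```
try:
    import brotli  # any optional third-party module works the same way
except ImportError:
    brotli = None

def decompress(data: bytes) -> bytes:
    if brotli is None:
        raise RuntimeError('brotli support is not available in this build')
    return brotli.decompress(data)
```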

View File

@@ -30,12 +30,11 @@ from .hls import HlsFD
 from .http import HttpFD
 from .ism import IsmFD
 from .mhtml import MhtmlFD
-from .niconico import NiconicoLiveFD
+from .niconico import NiconicoDmcFD, NiconicoLiveFD
 from .rtmp import RtmpFD
 from .rtsp import RtspFD
 from .websocket import WebSocketFragmentFD
 from .youtube_live_chat import YoutubeLiveChatFD
-from .bunnycdn import BunnyCdnFD
 
 PROTOCOL_MAP = {
     'rtmp': RtmpFD,
@@ -50,12 +49,12 @@ PROTOCOL_MAP = {
     'http_dash_segments_generator': DashSegmentsFD,
     'ism': IsmFD,
     'mhtml': MhtmlFD,
+    'niconico_dmc': NiconicoDmcFD,
     'niconico_live': NiconicoLiveFD,
     'fc2_live': FC2LiveFD,
     'websocket_frag': WebSocketFragmentFD,
     'youtube_live_chat': YoutubeLiveChatFD,
     'youtube_live_chat_replay': YoutubeLiveChatFD,
-    'bunnycdn': BunnyCdnFD,
 }
@@ -66,6 +65,7 @@ def shorten_protocol_name(proto, simplify=False):
         'rtmp_ffmpeg': 'rtmpF',
         'http_dash_segments': 'dash',
         'http_dash_segments_generator': 'dashG',
+        'niconico_dmc': 'dmc',
         'websocket_frag': 'WSfrag',
     }
     if simplify:
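
PROTOCOL_MAP is a plain dict dispatch: the protocol string recorded in the info dict selects the downloader class. A stripped-down sketch of how such a lookup is typically used (hypothetical names; this is not the real get_suitable_downloader logic, which also weighs external downloaders and features):

```
class HttpFD: ...
class RtmpFD: ...

PROTOCOL_MAP = {
    'http': HttpFD,
    'https': HttpFD,
    'rtmp': RtmpFD,
}

def pick_downloader(protocol):
    # Unlisted protocols fall back to the generic HTTP downloader
    return PROTOCOL_MAP.get(protocol, HttpFD)

assert pick_downloader('rtmp') is RtmpFD
```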

View File

@@ -1,50 +0,0 @@
-import hashlib
-import random
-import threading
-
-from .common import FileDownloader
-from . import HlsFD
-from ..networking import Request
-from ..networking.exceptions import network_exceptions
-
-
-class BunnyCdnFD(FileDownloader):
-    """
-    Downloads from BunnyCDN with required pings
-    Note, this is not a part of public API, and will be removed without notice.
-    DO NOT USE
-    """
-
-    def real_download(self, filename, info_dict):
-        self.to_screen(f'[{self.FD_NAME}] Downloading from BunnyCDN')
-
-        fd = HlsFD(self.ydl, self.params)
-
-        stop_event = threading.Event()
-        ping_thread = threading.Thread(target=self.ping_thread, args=(stop_event,), kwargs=info_dict['_bunnycdn_ping_data'])
-        ping_thread.start()
-
-        try:
-            return fd.real_download(filename, info_dict)
-        finally:
-            stop_event.set()
-
-    def ping_thread(self, stop_event, url, headers, secret, context_id):
-        # Site sends ping every 4 seconds, but this throttles the download. Pinging every 2 seconds seems to work.
-        ping_interval = 2
-        # Hard coded resolution as it doesn't seem to matter
-        res = 1080
-        paused = 'false'
-        current_time = 0
-
-        while not stop_event.wait(ping_interval):
-            current_time += ping_interval
-
-            time = current_time + round(random.random(), 6)
-            md5_hash = hashlib.md5(f'{secret}_{context_id}_{time}_{paused}_{res}'.encode()).hexdigest()
-            ping_url = f'{url}?hash={md5_hash}&time={time}&paused={paused}&resolution={res}'
-
-            try:
-                self.ydl.urlopen(Request(ping_url, headers=headers)).read()
-            except network_exceptions as e:
-                self.to_screen(f'[{self.FD_NAME}] Ping failed: {e}')
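
The deleted downloader keeps a BunnyCDN session alive by hitting a ping URL whose `hash` parameter is an MD5 over the secret, context id, jittered timestamp, pause state, and resolution. A condensed sketch of just that URL construction, lifted from the code above:

```
import hashlib
import random

def build_ping_url(url, secret, context_id, current_time, paused='false', res=1080):
    t = current_time + round(random.random(), 6)  # slight jitter, as above
    md5_hash = hashlib.md5(f'{secret}_{context_id}_{t}_{paused}_{res}'.encode()).hexdigest()
    return f'{url}?hash={md5_hash}&time={t}&paused={paused}&resolution={res}'
```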

View File

@@ -4,7 +4,6 @@ import functools
 import os
 import random
 import re
-import threading
 import time
 
 from ..minicurses import (
@@ -20,7 +19,9 @@ from ..utils import (
     Namespace,
     RetryManager,
     classproperty,
+    decodeArgument,
     deprecation_warning,
+    encodeFilename,
     format_bytes,
     join_nonempty,
     parse_bytes,
@@ -31,7 +32,6 @@ from ..utils import (
     timetuple_from_msec,
     try_call,
 )
-from ..utils._utils import _ProgressState
 
 class FileDownloader:
@@ -63,7 +63,6 @@ class FileDownloader:
     min_filesize:       Skip files smaller than this size
     max_filesize:       Skip files larger than this size
     xattr_set_filesize: Set ytdl.filesize user xattribute with expected size.
-    progress_delta:     The minimum time between progress output, in seconds
     external_downloader_args:  A dictionary of downloader keys (in lower case)
                         and a list of additional command-line arguments for the
                         executable. Use 'default' as the name for arguments to be
@@ -89,9 +88,6 @@ class FileDownloader:
         self.params = params
         self._prepare_multiline_status()
         self.add_progress_hook(self.report_progress)
-        if self.params.get('progress_delta'):
-            self._progress_delta_lock = threading.Lock()
-            self._progress_delta_time = time.monotonic()
 
     def _set_ydl(self, ydl):
         self.ydl = ydl
@@ -218,7 +214,7 @@ class FileDownloader:
     def temp_name(self, filename):
         """Returns a temporary filename for the given filename."""
         if self.params.get('nopart', False) or filename == '-' or \
-                (os.path.exists(filename) and not os.path.isfile(filename)):
+                (os.path.exists(encodeFilename(filename)) and not os.path.isfile(encodeFilename(filename))):
             return filename
         return filename + '.part'
@@ -272,7 +268,7 @@ class FileDownloader:
         """Try to set the last-modified time of the given file."""
         if last_modified_hdr is None:
             return
-        if not os.path.isfile(filename):
+        if not os.path.isfile(encodeFilename(filename)):
             return
         timestr = last_modified_hdr
         if timestr is None:
@@ -334,7 +330,7 @@ class FileDownloader:
                 progress_dict), s.get('progress_idx') or 0)
             self.to_console_title(self.ydl.evaluate_outtmpl(
                 progress_template.get('download-title') or 'yt-dlp %(progress._default_template)s',
-                progress_dict), _ProgressState.from_dict(s), s.get('_percent'))
+                progress_dict))
 
     def _format_progress(self, *args, **kwargs):
         return self.ydl._format_text(
@@ -358,7 +354,6 @@ class FileDownloader:
                 '_speed_str': self.format_speed(speed).strip(),
                 '_total_bytes_str': _format_bytes('total_bytes'),
                 '_elapsed_str': self.format_seconds(s.get('elapsed')),
-                '_percent': 100.0,
                 '_percent_str': self.format_percent(100),
             })
             self._report_progress_status(s, join_nonempty(
@@ -371,21 +366,13 @@ class FileDownloader:
         if s['status'] != 'downloading':
             return
 
-        if update_delta := self.params.get('progress_delta'):
-            with self._progress_delta_lock:
-                if time.monotonic() < self._progress_delta_time:
-                    return
-                self._progress_delta_time += update_delta
-
-        progress = try_call(
-            lambda: 100 * s['downloaded_bytes'] / s['total_bytes'],
-            lambda: 100 * s['downloaded_bytes'] / s['total_bytes_estimate'],
-            lambda: s['downloaded_bytes'] == 0 and 0)
         s.update({
             '_eta_str': self.format_eta(s.get('eta')).strip(),
             '_speed_str': self.format_speed(s.get('speed')),
-            '_percent': progress,
-            '_percent_str': self.format_percent(progress),
+            '_percent_str': self.format_percent(try_call(
+                lambda: 100 * s['downloaded_bytes'] / s['total_bytes'],
+                lambda: 100 * s['downloaded_bytes'] / s['total_bytes_estimate'],
+                lambda: s['downloaded_bytes'] == 0 and 0)),
             '_total_bytes_str': _format_bytes('total_bytes'),
             '_total_bytes_estimate_str': _format_bytes('total_bytes_estimate'),
             '_downloaded_bytes_str': _format_bytes('downloaded_bytes'),
@@ -406,7 +393,7 @@ class FileDownloader:
     def report_resuming_byte(self, resume_len):
         """Report attempt to resume at given byte."""
-        self.to_screen(f'[download] Resuming download at byte {resume_len}')
+        self.to_screen('[download] Resuming download at byte %s' % resume_len)
 
     def report_retry(self, err, count, retries, frag_index=NO_DEFAULT, fatal=True):
         """Report retry"""
@@ -434,13 +421,13 @@ class FileDownloader:
         """
         nooverwrites_and_exists = (
             not self.params.get('overwrites', True)
-            and os.path.exists(filename)
+            and os.path.exists(encodeFilename(filename))
         )
 
         if not hasattr(filename, 'write'):
             continuedl_and_exists = (
                 self.params.get('continuedl', True)
-                and os.path.isfile(filename)
+                and os.path.isfile(encodeFilename(filename))
                 and not self.params.get('nopart', False)
             )
@@ -450,7 +437,7 @@ class FileDownloader:
                 self._hook_progress({
                     'filename': filename,
                     'status': 'finished',
-                    'total_bytes': os.path.getsize(filename),
+                    'total_bytes': os.path.getsize(encodeFilename(filename)),
                 }, info_dict)
                 self._finish_multiline_status()
                 return True, False
@@ -491,7 +478,9 @@ class FileDownloader:
         if not self.params.get('verbose', False):
             return
 
-        if exe is None:
-            exe = os.path.basename(args[0])
-
-        self.write_debug(f'{exe} command line: {shell_quote(args)}')
+        str_args = [decodeArgument(a) for a in args]
+
+        if exe is None:
+            exe = os.path.basename(str_args[0])
+
+        self.write_debug(f'{exe} command line: {shell_quote(str_args)}')
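
Among the reverted features is `--progress-delta`, which rate-limits progress output by keeping a monotonic deadline behind a lock, as the removed hunk above shows. A self-contained sketch of that throttle (names simplified from the diff; not the exact FileDownloader wiring):

```
import threading
import time

class ProgressThrottle:
    def __init__(self, delta):
        self._delta = delta
        self._lock = threading.Lock()
        self._next_emit = time.monotonic()

    def should_emit(self):
        # Allow at most one progress line per `delta` seconds
        with self._lock:
            if time.monotonic() < self._next_emit:
                return False
            self._next_emit += self._delta
            return True
```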

View File

@@ -15,15 +15,12 @@ class DashSegmentsFD(FragmentFD):
     FD_NAME = 'dashsegments'
 
     def real_download(self, filename, info_dict):
-        if 'http_dash_segments_generator' in info_dict['protocol'].split('+'):
-            real_downloader = None  # No external FD can support --live-from-start
-        else:
-            if info_dict.get('is_live'):
-                self.report_error('Live DASH videos are not supported')
-            real_downloader = get_suitable_downloader(
-                info_dict, self.params, None, protocol='dash_frag_urls', to_stdout=(filename == '-'))
+        if info_dict.get('is_live') and set(info_dict['protocol'].split('+')) != {'http_dash_segments_generator'}:
+            self.report_error('Live DASH videos are not supported')
 
         real_start = time.time()
+        real_downloader = get_suitable_downloader(
+            info_dict, self.params, None, protocol='dash_frag_urls', to_stdout=(filename == '-'))
 
         requested_formats = [{**info_dict, **fmt} for fmt in info_dict.get('requested_formats', [])]
         args = []

View File

@@ -1,5 +1,4 @@
 import enum
-import functools
 import json
 import os
 import re
@@ -10,6 +9,7 @@ import time
 import uuid
 
 from .fragment import FragmentFD
+from ..compat import functools
 from ..networking import Request
 from ..postprocessor.ffmpeg import EXT_TO_OUT_FORMATS, FFmpegPostProcessor
 from ..utils import (
@@ -23,6 +23,7 @@ from ..utils import (
     cli_valueless_option,
     determine_ext,
     encodeArgument,
+    encodeFilename,
     find_available_port,
     remove_end,
     traverse_obj,
@@ -54,7 +55,7 @@ class ExternalFD(FragmentFD):
             # correct and expected termination thus all postprocessing
             # should take place
             retval = 0
-            self.to_screen(f'[{self.get_basename()}] Interrupted by user')
+            self.to_screen('[%s] Interrupted by user' % self.get_basename())
         finally:
             if self._cookies_tempfile:
                 self.try_remove(self._cookies_tempfile)
@@ -66,7 +67,7 @@ class ExternalFD(FragmentFD):
                 'elapsed': time.time() - started,
             }
             if filename != '-':
-                fsize = os.path.getsize(tmpfilename)
+                fsize = os.path.getsize(encodeFilename(tmpfilename))
                 self.try_rename(tmpfilename, filename)
                 status.update({
                     'downloaded_bytes': fsize,
@@ -107,7 +108,7 @@ class ExternalFD(FragmentFD):
         return all((
             not info_dict.get('to_stdout') or Features.TO_STDOUT in cls.SUPPORTED_FEATURES,
             '+' not in info_dict['protocol'] or Features.MULTIPLE_FORMATS in cls.SUPPORTED_FEATURES,
-            not traverse_obj(info_dict, ('hls_aes', ...), 'extra_param_to_segment_url', 'extra_param_to_key_url'),
+            not traverse_obj(info_dict, ('hls_aes', ...), 'extra_param_to_segment_url'),
             all(proto in cls.SUPPORTED_PROTOCOLS for proto in info_dict['protocol'].split('+')),
         ))
@@ -171,7 +172,7 @@ class ExternalFD(FragmentFD):
         decrypt_fragment = self.decrypter(info_dict)
         dest, _ = self.sanitize_open(tmpfilename, 'wb')
         for frag_index, fragment in enumerate(info_dict['fragments']):
-            fragment_filename = f'{tmpfilename}-Frag{frag_index}'
+            fragment_filename = '%s-Frag%d' % (tmpfilename, frag_index)
             try:
                 src, _ = self.sanitize_open(fragment_filename, 'rb')
             except OSError as err:
@@ -183,9 +184,9 @@ class ExternalFD(FragmentFD):
             dest.write(decrypt_fragment(fragment, src.read()))
             src.close()
             if not self.params.get('keep_fragments', False):
-                self.try_remove(fragment_filename)
+                self.try_remove(encodeFilename(fragment_filename))
         dest.close()
-        self.try_remove(f'{tmpfilename}.frag.urls')
+        self.try_remove(encodeFilename('%s.frag.urls' % tmpfilename))
         return 0
 
     def _call_process(self, cmd, info_dict):
@@ -334,12 +335,12 @@ class Aria2cFD(ExternalFD):
             cmd += ['--auto-file-renaming=false']
 
         if 'fragments' in info_dict:
-            cmd += ['--uri-selector=inorder']
-            url_list_file = f'{tmpfilename}.frag.urls'
+            cmd += ['--file-allocation=none', '--uri-selector=inorder']
+            url_list_file = '%s.frag.urls' % tmpfilename
             url_list = []
             for frag_index, fragment in enumerate(info_dict['fragments']):
-                fragment_filename = f'{os.path.basename(tmpfilename)}-Frag{frag_index}'
-                url_list.append('{}\n\tout={}'.format(fragment['url'], self._aria2c_filename(fragment_filename)))
+                fragment_filename = '%s-Frag%d' % (os.path.basename(tmpfilename), frag_index)
+                url_list.append('%s\n\tout=%s' % (fragment['url'], self._aria2c_filename(fragment_filename)))
             stream, _ = self.sanitize_open(url_list_file, 'wb')
             stream.write('\n'.join(url_list).encode())
             stream.close()
@@ -356,7 +357,7 @@ class Aria2cFD(ExternalFD):
             'id': sanitycheck,
             'method': method,
             'params': [f'token:{rpc_secret}', *params],
-        }).encode()
+        }).encode('utf-8')
         request = Request(
             f'http://localhost:{rpc_port}/jsonrpc',
             data=d, headers={
@@ -415,7 +416,7 @@ class Aria2cFD(ExternalFD):
                 'total_bytes_estimate': total,
                 'eta': (total - downloaded) / (speed or 1),
                 'fragment_index': min(frag_count, len(completed) + 1) if fragmented else None,
-                'elapsed': time.time() - started,
+                'elapsed': time.time() - started
             })
             self._hook_progress(status, info_dict)
@@ -457,6 +458,8 @@ class FFmpegFD(ExternalFD):
     @classmethod
     def available(cls, path=None):
+        # TODO: Fix path for ffmpeg
+        # Fixme: This may be wrong when --ffmpeg-location is used
         return FFmpegPostProcessor().available
 
     def on_process_started(self, proc, stdin):
@@ -488,7 +491,7 @@ class FFmpegFD(ExternalFD):
         if not self.params.get('verbose'):
             args += ['-hide_banner']
 
-        args += traverse_obj(info_dict, ('downloader_options', 'ffmpeg_args', ...))
+        args += traverse_obj(info_dict, ('downloader_options', 'ffmpeg_args'), default=[])
 
         # These exists only for compatibility. Extractors should use
         # info_dict['downloader_options']['ffmpeg_args'] instead
@@ -505,13 +508,13 @@ class FFmpegFD(ExternalFD):
         env = None
         proxy = self.params.get('proxy')
         if proxy:
-            if not re.match(r'[\da-zA-Z]+://', proxy):
-                proxy = f'http://{proxy}'
+            if not re.match(r'^[\da-zA-Z]+://', proxy):
+                proxy = 'http://%s' % proxy
 
             if proxy.startswith('socks'):
                 self.report_warning(
-                    f'{self.get_basename()} does not support SOCKS proxies. Downloading is likely to fail. '
-                    'Consider adding --hls-prefer-native to your command.')
+                    '%s does not support SOCKS proxies. Downloading is likely to fail. '
+                    'Consider adding --hls-prefer-native to your command.' % self.get_basename())
 
             # Since December 2015 ffmpeg supports -http_proxy option (see
             # http://git.videolan.org/?p=ffmpeg.git;a=commit;h=b4eb1f29ebddd60c41a2eb39f5af701e38e0d3fd)
@@ -556,7 +559,7 @@ class FFmpegFD(ExternalFD):
         selected_formats = info_dict.get('requested_formats') or [info_dict]
         for i, fmt in enumerate(selected_formats):
-            is_http = re.match(r'https?://', fmt['url'])
+            is_http = re.match(r'^https?://', fmt['url'])
             cookies = self.ydl.cookiejar.get_cookies_for_url(fmt['url']) if is_http else []
             if cookies:
                 args.extend(['-cookies', ''.join(
@@ -572,7 +575,7 @@ class FFmpegFD(ExternalFD):
             if end_time:
                 args += ['-t', str(end_time - start_time)]
 
-            args += [*self._configuration_args((f'_i{i + 1}', '_i')), '-i', fmt['url']]
+            args += self._configuration_args((f'_i{i + 1}', '_i')) + ['-i', fmt['url']]
 
         if not (start_time or end_time) or not self.params.get('force_keyframes_at_cuts'):
             args += ['-c', 'copy']
@@ -612,12 +615,10 @@ class FFmpegFD(ExternalFD):
         else:
             args += ['-f', EXT_TO_OUT_FORMATS.get(ext, ext)]
 
-        args += traverse_obj(info_dict, ('downloader_options', 'ffmpeg_args_out', ...))
-
         args += self._configuration_args(('_o1', '_o', ''))
 
         args = [encodeArgument(opt) for opt in args]
-        args.append(ffpp._ffmpeg_filename_argument(tmpfilename))
+        args.append(encodeFilename(ffpp._ffmpeg_filename_argument(tmpfilename), True))
         self._debug_cmd(args)
 
         piped = any(fmt['url'] in ('-', 'pipe:') for fmt in selected_formats)
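
The Aria2cFD hunks show its JSON-RPC shape: each request carries an id, a method, and a params array whose first element is the `token:<secret>` credential. A rough sketch of such a call using only the standard library (port and secret are placeholders; `aria2.tellStatus` is part of aria2's documented RPC interface, but this is not the yt-dlp implementation):

```
import json
import urllib.request

def aria2_tell_status(rpc_port, rpc_secret, gid):
    payload = json.dumps({
        'jsonrpc': '2.0',
        'id': 'sanitycheck',
        'method': 'aria2.tellStatus',
        'params': [f'token:{rpc_secret}', gid],
    }).encode()
    req = urllib.request.Request(
        f'http://localhost:{rpc_port}/jsonrpc',
        data=payload, headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())['result']
```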

View File

@@ -67,12 +67,12 @@ class FlvReader(io.BytesIO):
         self.read_bytes(3)
         quality_entry_count = self.read_unsigned_char()
         # QualityEntryCount
-        for _ in range(quality_entry_count):
+        for i in range(quality_entry_count):
             self.read_string()
 
         segment_run_count = self.read_unsigned_int()
         segments = []
-        for _ in range(segment_run_count):
+        for i in range(segment_run_count):
             first_segment = self.read_unsigned_int()
             fragments_per_segment = self.read_unsigned_int()
             segments.append((first_segment, fragments_per_segment))
@@ -91,12 +91,12 @@ class FlvReader(io.BytesIO):
         quality_entry_count = self.read_unsigned_char()
         # QualitySegmentUrlModifiers
-        for _ in range(quality_entry_count):
+        for i in range(quality_entry_count):
             self.read_string()
 
         fragments_count = self.read_unsigned_int()
         fragments = []
-        for _ in range(fragments_count):
+        for i in range(fragments_count):
             first = self.read_unsigned_int()
             first_ts = self.read_unsigned_long_long()
             duration = self.read_unsigned_int()
@@ -135,11 +135,11 @@ class FlvReader(io.BytesIO):
         self.read_string()  # MovieIdentifier
         server_count = self.read_unsigned_char()
         # ServerEntryTable
-        for _ in range(server_count):
+        for i in range(server_count):
             self.read_string()
         quality_count = self.read_unsigned_char()
         # QualityEntryTable
-        for _ in range(quality_count):
+        for i in range(quality_count):
             self.read_string()
         # DrmData
         self.read_string()
@@ -148,14 +148,14 @@ class FlvReader(io.BytesIO):
         segments_count = self.read_unsigned_char()
         segments = []
-        for _ in range(segments_count):
+        for i in range(segments_count):
             box_size, box_type, box_data = self.read_box_info()
             assert box_type == b'asrt'
             segment = FlvReader(box_data).read_asrt()
             segments.append(segment)
         fragments_run_count = self.read_unsigned_char()
         fragments = []
-        for _ in range(fragments_run_count):
+        for i in range(fragments_run_count):
             box_size, box_type, box_data = self.read_box_info()
             assert box_type == b'afrt'
             fragments.append(FlvReader(box_data).read_afrt())
@@ -309,7 +309,7 @@ class F4mFD(FragmentFD):
     def real_download(self, filename, info_dict):
         man_url = info_dict['url']
         requested_bitrate = info_dict.get('tbr')
-        self.to_screen(f'[{self.FD_NAME}] Downloading f4m manifest')
+        self.to_screen('[%s] Downloading f4m manifest' % self.FD_NAME)
 
         urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))
         man_url = urlh.url
@@ -326,8 +326,8 @@ class F4mFD(FragmentFD):
             formats = sorted(formats, key=lambda f: f[0])
             rate, media = formats[-1]
         else:
-            rate, media = next(filter(
-                lambda f: int(f[0]) == requested_bitrate, formats))
+            rate, media = list(filter(
+                lambda f: int(f[0]) == requested_bitrate, formats))[0]
 
         # Prefer baseURL for relative URLs as per 11.2 of F4M 3.0 spec.
         man_base_url = get_base_url(doc) or man_url

View File

@@ -9,11 +9,11 @@ import time
 from .common import FileDownloader
 from .http import HttpFD
 from ..aes import aes_cbc_decrypt_bytes, unpad_pkcs7
+from ..compat import compat_os_name
 from ..networking import Request
 from ..networking.exceptions import HTTPError, IncompleteRead
-from ..utils import DownloadError, RetryManager, traverse_obj
+from ..utils import DownloadError, RetryManager, encodeFilename, traverse_obj
 from ..utils.networking import HTTPHeaderDict
-from ..utils.progress import ProgressCalculator
 
 class HttpQuietDownloader(HttpFD):
@@ -151,7 +151,7 @@ class FragmentFD(FileDownloader):
         if self.__do_ytdl_file(ctx):
             self._write_ytdl_file(ctx)
         if not self.params.get('keep_fragments', False):
-            self.try_remove(ctx['fragment_filename_sanitized'])
+            self.try_remove(encodeFilename(ctx['fragment_filename_sanitized']))
         del ctx['fragment_filename_sanitized']
 
     def _prepare_frag_download(self, ctx):
@@ -187,7 +187,7 @@ class FragmentFD(FileDownloader):
         })
 
         if self.__do_ytdl_file(ctx):
-            ytdl_file_exists = os.path.isfile(self.ytdl_filename(ctx['filename']))
+            ytdl_file_exists = os.path.isfile(encodeFilename(self.ytdl_filename(ctx['filename'])))
             continuedl = self.params.get('continuedl', True)
             if continuedl and ytdl_file_exists:
                 self._read_ytdl_file(ctx)
@@ -198,7 +198,7 @@ class FragmentFD(FileDownloader):
                         '.ytdl file is corrupt' if is_corrupt else
                         'Inconsistent state of incomplete fragment download')
                     self.report_warning(
-                        f'{message}. Restarting from the beginning ...')
+                        '%s. Restarting from the beginning ...' % message)
                     ctx['fragment_index'] = resume_len = 0
                     if 'ytdl_corrupt' in ctx:
                         del ctx['ytdl_corrupt']
@@ -226,7 +226,8 @@ class FragmentFD(FileDownloader):
             resume_len = ctx['complete_frags_downloaded_bytes']
         total_frags = ctx['total_frags']
         ctx_id = ctx.get('ctx_id')
-        # Stores the download progress, updated by the progress hook
+        # This dict stores the download progress, it's updated by the progress
+        # hook
         state = {
             'status': 'downloading',
             'downloaded_bytes': resume_len,
@@ -236,8 +237,14 @@ class FragmentFD(FileDownloader):
             'tmpfilename': ctx['tmpfilename'],
         }
 
-        ctx['started'] = time.time()
-        progress = ProgressCalculator(resume_len)
+        start = time.time()
+        ctx.update({
+            'started': start,
+            'fragment_started': start,
+            # Amount of fragment's bytes downloaded by the time of the previous
+            # frag progress hook invocation
+            'prev_frag_downloaded_bytes': 0,
+        })
 
         def frag_progress_hook(s):
             if s['status'] not in ('downloading', 'finished'):
@@ -252,35 +259,38 @@ class FragmentFD(FileDownloader):
             state['max_progress'] = ctx.get('max_progress')
             state['progress_idx'] = ctx.get('progress_idx')
 
-            state['elapsed'] = progress.elapsed
+            time_now = time.time()
+            state['elapsed'] = time_now - start
             frag_total_bytes = s.get('total_bytes') or 0
             s['fragment_info_dict'] = s.pop('info_dict', {})
-            # XXX: Fragment resume is not accounted for here
             if not ctx['live']:
                 estimated_size = (
                     (ctx['complete_frags_downloaded_bytes'] + frag_total_bytes)
                     / (state['fragment_index'] + 1) * total_frags)
-                progress.total = estimated_size
-                progress.update(s.get('downloaded_bytes'))
-                state['total_bytes_estimate'] = progress.total
-            else:
-                progress.update(s.get('downloaded_bytes'))
+                state['total_bytes_estimate'] = estimated_size
 
             if s['status'] == 'finished':
                 state['fragment_index'] += 1
                 ctx['fragment_index'] = state['fragment_index']
-                progress.thread_reset()
-                state['downloaded_bytes'] = ctx['complete_frags_downloaded_bytes'] = progress.downloaded
-                state['speed'] = ctx['speed'] = progress.speed.smooth
-                state['eta'] = progress.eta.smooth
+                state['downloaded_bytes'] += frag_total_bytes - ctx['prev_frag_downloaded_bytes']
+                ctx['complete_frags_downloaded_bytes'] = state['downloaded_bytes']
+                ctx['speed'] = state['speed'] = self.calc_speed(
+                    ctx['fragment_started'], time_now, frag_total_bytes)
+                ctx['fragment_started'] = time.time()
+                ctx['prev_frag_downloaded_bytes'] = 0
+            else:
+                frag_downloaded_bytes = s['downloaded_bytes']
+                state['downloaded_bytes'] += frag_downloaded_bytes - ctx['prev_frag_downloaded_bytes']
+                ctx['speed'] = state['speed'] = self.calc_speed(
+                    ctx['fragment_started'], time_now, frag_downloaded_bytes - ctx.get('frag_resume_len', 0))
+                if not ctx['live']:
+                    state['eta'] = self.calc_eta(state['speed'], estimated_size - state['downloaded_bytes'])
+                ctx['prev_frag_downloaded_bytes'] = frag_downloaded_bytes
 
             self._hook_progress(state, info_dict)
 
         ctx['dl'].add_progress_hook(frag_progress_hook)
-        return ctx['started']
+        return start
 
     def _finish_frag_download(self, ctx, info_dict):
         ctx['dest_stream'].close()
@@ -365,10 +375,10 @@ class FragmentFD(FileDownloader):
         return decrypt_fragment
 
     def download_and_append_fragments_multiple(self, *args, **kwargs):
-        """
+        '''
         @params (ctx1, fragments1, info_dict1), (ctx2, fragments2, info_dict2), ...
         all args must be either tuple or list
-        """
+        '''
         interrupt_trigger = [True]
         max_progress = len(args)
         if max_progress == 1:
@@ -389,7 +399,7 @@ class FragmentFD(FileDownloader):
             def __exit__(self, exc_type, exc_val, exc_tb):
                 pass
 
-        if os.name == 'nt':
+        if compat_os_name == 'nt':
            def future_result(future):
                 while True:
                     try:
@@ -423,7 +433,7 @@ class FragmentFD(FileDownloader):
             finally:
                 tpe.shutdown(wait=True)
         if not interrupt_trigger[0] and not is_live:
-            raise KeyboardInterrupt
+            raise KeyboardInterrupt()
         # we expect the user wants to stop and DO WANT the preceding postprocessors to run;
         # so returning a intermediate result here instead of KeyboardInterrupt on live
         return result
@@ -490,6 +500,7 @@ class FragmentFD(FileDownloader):
                 download_fragment(fragment, ctx_copy)
                 return fragment, fragment['frag_index'], ctx_copy.get('fragment_filename_sanitized')
 
+        self.report_warning('The download speed shown is only of one thread. This is a known issue')
         with tpe or concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
             try:
                 for fragment, frag_index, frag_filename in pool.map(_download_fragment, fragments):


@@ -16,7 +16,6 @@ from ..utils import (
     update_url_query,
     urljoin,
 )
-from ..utils._utils import _request_dump_filename


 class HlsFD(FragmentFD):
@@ -73,23 +72,11 @@ class HlsFD(FragmentFD):
     def real_download(self, filename, info_dict):
         man_url = info_dict['url']
-        s = info_dict.get('hls_media_playlist_data')
-        if s:
-            self.to_screen(f'[{self.FD_NAME}] Using m3u8 manifest from extracted info')
-        else:
-            self.to_screen(f'[{self.FD_NAME}] Downloading m3u8 manifest')
-            urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))
-            man_url = urlh.url
-            s_bytes = urlh.read()
-            if self.params.get('write_pages'):
-                dump_filename = _request_dump_filename(
-                    man_url, info_dict['id'], None,
-                    trim_length=self.params.get('trim_file_name'))
-                self.to_screen(f'[{self.FD_NAME}] Saving request to {dump_filename}')
-                with open(dump_filename, 'wb') as outf:
-                    outf.write(s_bytes)
-            s = s_bytes.decode('utf-8', 'ignore')
+        self.to_screen('[%s] Downloading m3u8 manifest' % self.FD_NAME)
+
+        urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))
+        man_url = urlh.url
+        s = urlh.read().decode('utf-8', 'ignore')

         can_download, message = self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')), None
         if can_download:
@@ -132,12 +119,12 @@ class HlsFD(FragmentFD):
             self.to_screen(f'[{self.FD_NAME}] Fragment downloads will be delegated to {real_downloader.get_basename()}')

         def is_ad_fragment_start(s):
-            return ((s.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in s)
-                    or (s.startswith('#UPLYNK-SEGMENT') and s.endswith(',ad')))
+            return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in s
+                    or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',ad'))

         def is_ad_fragment_end(s):
-            return ((s.startswith('#ANVATO-SEGMENT-INFO') and 'type=master' in s)
-                    or (s.startswith('#UPLYNK-SEGMENT') and s.endswith(',segment')))
+            return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=master' in s
+                    or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',segment'))

         fragments = []
@@ -173,12 +160,10 @@ class HlsFD(FragmentFD):
         extra_state = ctx.setdefault('extra_state', {})

         format_index = info_dict.get('format_index')
-        extra_segment_query = None
-        if extra_param_to_segment_url := info_dict.get('extra_param_to_segment_url'):
-            extra_segment_query = urllib.parse.parse_qs(extra_param_to_segment_url)
-        extra_key_query = None
-        if extra_param_to_key_url := info_dict.get('extra_param_to_key_url'):
-            extra_key_query = urllib.parse.parse_qs(extra_param_to_key_url)
+        extra_query = None
+        extra_param_to_segment_url = info_dict.get('extra_param_to_segment_url')
+        if extra_param_to_segment_url:
+            extra_query = urllib.parse.parse_qs(extra_param_to_segment_url)
         i = 0
         media_sequence = 0
         decrypt_info = {'METHOD': 'NONE'}
@@ -190,7 +175,6 @@ class HlsFD(FragmentFD):
             if external_aes_iv:
                 external_aes_iv = binascii.unhexlify(remove_start(external_aes_iv, '0x').zfill(32))
         byte_range = {}
-        byte_range_offset = 0
         discontinuity_count = 0
         frag_index = 0
         ad_frag_next = False
@@ -206,8 +190,8 @@ class HlsFD(FragmentFD):
                 if frag_index <= ctx['fragment_index']:
                     continue
                 frag_url = urljoin(man_url, line)
-                if extra_segment_query:
-                    frag_url = update_url_query(frag_url, extra_segment_query)
+                if extra_query:
+                    frag_url = update_url_query(frag_url, extra_query)

                 fragments.append({
                     'frag_index': frag_index,
@@ -218,11 +202,6 @@ class HlsFD(FragmentFD):
                 })
                 media_sequence += 1

-                # If the byte_range is truthy, reset it after appending a fragment that uses it
-                if byte_range:
-                    byte_range_offset = byte_range['end']
-                    byte_range = {}
-
             elif line.startswith('#EXT-X-MAP'):
                 if format_index and discontinuity_count != format_index:
                     continue
@@ -233,15 +212,13 @@ class HlsFD(FragmentFD):
                 frag_index += 1
                 map_info = parse_m3u8_attributes(line[11:])
                 frag_url = urljoin(man_url, map_info.get('URI'))
-                if extra_segment_query:
-                    frag_url = update_url_query(frag_url, extra_segment_query)
+                if extra_query:
+                    frag_url = update_url_query(frag_url, extra_query)

-                map_byte_range = {}
                 if map_info.get('BYTERANGE'):
                     splitted_byte_range = map_info.get('BYTERANGE').split('@')
-                    sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else 0
-                    map_byte_range = {
+                    sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else byte_range['end']
+                    byte_range = {
                         'start': sub_range_start,
                         'end': sub_range_start + int(splitted_byte_range[0]),
                     }
@@ -250,8 +227,8 @@ class HlsFD(FragmentFD):
                     'frag_index': frag_index,
                     'url': frag_url,
                     'decrypt_info': decrypt_info,
-                    'byte_range': map_byte_range,
-                    'media_sequence': media_sequence,
+                    'byte_range': byte_range,
+                    'media_sequence': media_sequence
                 })
                 media_sequence += 1
@@ -267,10 +244,8 @@ class HlsFD(FragmentFD):
                         decrypt_info['KEY'] = external_aes_key
                     else:
                         decrypt_info['URI'] = urljoin(man_url, decrypt_info['URI'])
-                        if extra_key_query or extra_segment_query:
-                            # Fall back to extra_segment_query to key for backwards compat
-                            decrypt_info['URI'] = update_url_query(
-                                decrypt_info['URI'], extra_key_query or extra_segment_query)
+                        if extra_query:
+                            decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_query)
                         if decrypt_url != decrypt_info['URI']:
                             decrypt_info['KEY'] = None
@@ -278,7 +253,7 @@ class HlsFD(FragmentFD):
                 media_sequence = int(line[22:])
             elif line.startswith('#EXT-X-BYTERANGE'):
                 splitted_byte_range = line[17:].split('@')
-                sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else byte_range_offset
+                sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else byte_range['end']
                 byte_range = {
                     'start': sub_range_start,
                     'end': sub_range_start + int(splitted_byte_range[0]),
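Both sides of this hunk parse EXT-X-BYTERANGE values of the form `<length>[@<offset>]`, defaulting the offset to the end of the previous range when it is omitted, as RFC 8216 section 4.3.2.2 requires. A standalone sketch of that logic (the helper name is illustrative):

```python
def parse_byterange(value: str, previous_end: int = 0) -> dict:
    """Parse an EXT-X-BYTERANGE value '<length>[@<offset>]' into start/end offsets."""
    length, _, offset = value.partition('@')
    start = int(offset) if offset else previous_end
    return {'start': start, 'end': start + int(length)}

print(parse_byterange('75232@0'))        # {'start': 0, 'end': 75232}
print(parse_byterange('82112', 75232))   # no offset: continues at byte 75232
```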
@@ -375,8 +350,9 @@ class HlsFD(FragmentFD):
                     # XXX: this should probably be silent as well
                     # or verify that all segments contain the same data
                     self.report_warning(bug_reports_message(
-                        f'Discarding a {type(block).__name__} block found in the middle of the stream; '
-                        'if the subtitles display incorrectly,'))
+                        'Discarding a %s block found in the middle of the stream; '
+                        'if the subtitles display incorrectly,'
+                        % (type(block).__name__)))
                     continue
                 block.write_into(output)
@@ -393,10 +369,7 @@ class HlsFD(FragmentFD):
                 return output.getvalue().encode()

-            if len(fragments) == 1:
-                self.download_and_append_fragments(ctx, fragments, info_dict)
-            else:
-                self.download_and_append_fragments(
-                    ctx, fragments, info_dict, pack_func=pack_fragment, finish_func=fin_fragments)
+            self.download_and_append_fragments(
+                ctx, fragments, info_dict, pack_func=pack_fragment, finish_func=fin_fragments)
         else:
             return self.download_and_append_fragments(ctx, fragments, info_dict)


@@ -15,6 +15,7 @@ from ..utils import (
     ThrottledDownload,
     XAttrMetadataError,
     XAttrUnavailableError,
+    encodeFilename,
     int_or_none,
     parse_http_range,
     try_call,
@@ -57,8 +58,9 @@ class HttpFD(FileDownloader):
             if self.params.get('continuedl', True):
                 # Establish possible resume length
-                if os.path.isfile(ctx.tmpfilename):
-                    ctx.resume_len = os.path.getsize(ctx.tmpfilename)
+                if os.path.isfile(encodeFilename(ctx.tmpfilename)):
+                    ctx.resume_len = os.path.getsize(
+                        encodeFilename(ctx.tmpfilename))

                 ctx.is_resume = ctx.resume_len > 0
@@ -174,7 +176,7 @@ class HttpFD(FileDownloader):
                         'downloaded_bytes': ctx.resume_len,
                         'total_bytes': ctx.resume_len,
                     }, info_dict)
-                    raise SucceedDownload
+                    raise SucceedDownload()
                 else:
                     # The length does not match, we start the download over
                     self.report_unable_to_resume()
@@ -192,7 +194,7 @@ class HttpFD(FileDownloader):
        def close_stream():
            if ctx.stream is not None:
-               if ctx.tmpfilename != '-':
+               if not ctx.tmpfilename == '-':
                    ctx.stream.close()
                ctx.stream = None
@@ -235,13 +237,8 @@ class HttpFD(FileDownloader):
         def retry(e):
             close_stream()
-            if ctx.tmpfilename == '-':
-                ctx.resume_len = byte_counter
-            else:
-                try:
-                    ctx.resume_len = os.path.getsize(ctx.tmpfilename)
-                except FileNotFoundError:
-                    ctx.resume_len = 0
+            ctx.resume_len = (byte_counter if ctx.tmpfilename == '-'
+                              else os.path.getsize(encodeFilename(ctx.tmpfilename)))
             raise RetryDownload(e)

         while True:
@@ -266,20 +263,20 @@ class HttpFD(FileDownloader):
                         ctx.filename = self.undo_temp_name(ctx.tmpfilename)
                         self.report_destination(ctx.filename)
                 except OSError as err:
-                    self.report_error(f'unable to open for writing: {err}')
+                    self.report_error('unable to open for writing: %s' % str(err))
                     return False

                 if self.params.get('xattr_set_filesize', False) and data_len is not None:
                     try:
                         write_xattr(ctx.tmpfilename, 'user.ytdl.filesize', str(data_len).encode())
                     except (XAttrUnavailableError, XAttrMetadataError) as err:
-                        self.report_error(f'unable to set filesize xattr: {err}')
+                        self.report_error('unable to set filesize xattr: %s' % str(err))

             try:
                 ctx.stream.write(data_block)
             except OSError as err:
                 self.to_stderr('\n')
-                self.report_error(f'unable to write data: {err}')
+                self.report_error('unable to write data: %s' % str(err))
                 return False

             # Apply rate limit
@@ -325,7 +322,7 @@ class HttpFD(FileDownloader):
                 elif now - ctx.throttle_start > 3:
                     if ctx.stream is not None and ctx.tmpfilename != '-':
                         ctx.stream.close()
-                    raise ThrottledDownload
+                    raise ThrottledDownload()
             elif speed:
                 ctx.throttle_start = None
@@ -336,7 +333,7 @@ class HttpFD(FileDownloader):
         if not is_test and ctx.chunk_size and ctx.content_len is not None and byte_counter < ctx.content_len:
             ctx.resume_len = byte_counter
-            raise NextFragment
+            raise NextFragment()

         if ctx.tmpfilename != '-':
             ctx.stream.close()


@@ -251,7 +251,7 @@ class IsmFD(FragmentFD):
         skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
         frag_index = 0
-        for segment in segments:
+        for i, segment in enumerate(segments):
             frag_index += 1
             if frag_index <= ctx['fragment_index']:
                 continue


@@ -10,7 +10,7 @@ from ..version import __version__ as YT_DLP_VERSION
 class MhtmlFD(FragmentFD):
-    _STYLESHEET = '''\
+    _STYLESHEET = """\
 html, body {
     margin: 0;
     padding: 0;
@@ -45,7 +45,7 @@ body > figure > img {
     max-width: 100%;
     max-height: calc(100vh - 5em);
 }
-'''
+"""

     _STYLESHEET = re.sub(r'\s+', ' ', _STYLESHEET)
     _STYLESHEET = re.sub(r'\B \B|(?<=[\w\-]) (?=[^\w\-])|(?<=[^\w\-]) (?=[\w\-])', '', _STYLESHEET)
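The two `re.sub` passes minify the stylesheet: the first collapses every whitespace run to a single space, the second deletes any remaining space that is not between two word characters. A quick demonstration of the same pair of substitutions on a tiny rule:

```python
import re

css = 'html, body {\n    margin: 0;\n    padding: 0;\n}\n'
# Pass 1: collapse all whitespace runs to single spaces
css = re.sub(r'\s+', ' ', css)
# Pass 2: drop spaces between two non-word chars, or between a word char
# (or hyphen) and a non-word char in either order
css = re.sub(r'\B \B|(?<=[\w\-]) (?=[^\w\-])|(?<=[^\w\-]) (?=[\w\-])', '', css)
print(css)  # -> html,body{margin:0;padding:0;}
```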
@@ -57,19 +57,24 @@ body > figure > img {
     )).decode('us-ascii') + '?='

     def _gen_cid(self, i, fragment, frag_boundary):
-        return f'{i}.{frag_boundary}@yt-dlp.github.io.invalid'
+        return '%u.%s@yt-dlp.github.io.invalid' % (i, frag_boundary)

     def _gen_stub(self, *, fragments, frag_boundary, title):
         output = io.StringIO()

-        output.write(
+        output.write((
             '<!DOCTYPE html>'
             '<html>'
             '<head>'
-            f'<meta name="generator" content="yt-dlp {escapeHTML(YT_DLP_VERSION)}">'
-            f'<title>{escapeHTML(title)}</title>'
-            f'<style>{self._STYLESHEET}</style>'
-            '<body>')
+            '' '<meta name="generator" content="yt-dlp {version}">'
+            '' '<title>{title}</title>'
+            '' '<style>{styles}</style>'
+            '<body>'
+        ).format(
+            version=escapeHTML(YT_DLP_VERSION),
+            styles=self._STYLESHEET,
+            title=escapeHTML(title)
+        ))

         t0 = 0
         for i, frag in enumerate(fragments):
@@ -82,12 +87,15 @@ body > figure > img {
                         num=i + 1,
                         t0=srt_subtitles_timecode(t0),
                         t1=srt_subtitles_timecode(t1),
-                        duration=formatSeconds(frag['duration'], msec=True),
+                        duration=formatSeconds(frag['duration'], msec=True)
                     ))
                 except (KeyError, ValueError, TypeError):
                     t1 = None
-                    output.write(f'<figcaption>Slide #{i + 1}</figcaption>')
-            output.write(f'<img src="cid:{self._gen_cid(i, frag, frag_boundary)}">')
+                    output.write((
+                        '<figcaption>Slide #{num}</figcaption>'
+                    ).format(num=i + 1))
+            output.write('<img src="cid:{cid}">'.format(
+                cid=self._gen_cid(i, frag, frag_boundary)))
             output.write('</figure>')
             t0 = t1
@@ -118,24 +126,31 @@ body > figure > img {
         stub = self._gen_stub(
             fragments=fragments,
             frag_boundary=frag_boundary,
-            title=title,
+            title=title
         )

         ctx['dest_stream'].write((
             'MIME-Version: 1.0\r\n'
             'From: <nowhere@yt-dlp.github.io.invalid>\r\n'
             'To: <nowhere@yt-dlp.github.io.invalid>\r\n'
-            f'Subject: {self._escape_mime(title)}\r\n'
+            'Subject: {title}\r\n'
             'Content-type: multipart/related; '
-            f'boundary="{frag_boundary}"; '
+            '' 'boundary="{boundary}"; '
             '' 'type="text/html"\r\n'
-            f'X.yt-dlp.Origin: {origin}\r\n'
+            'X.yt-dlp.Origin: {origin}\r\n'
             '\r\n'
-            f'--{frag_boundary}\r\n'
+            '--{boundary}\r\n'
             'Content-Type: text/html; charset=utf-8\r\n'
-            f'Content-Length: {len(stub)}\r\n'
+            'Content-Length: {length}\r\n'
             '\r\n'
-            f'{stub}\r\n').encode())
+            '{stub}\r\n'
+        ).format(
+            origin=origin,
+            boundary=frag_boundary,
+            length=len(stub),
+            title=self._escape_mime(title),
+            stub=stub
+        ).encode())
         extra_state['header_written'] = True

         for i, fragment in enumerate(fragments):
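For orientation, the writer emits an RFC 2046 multipart/related preamble with CRLF line endings before appending the image parts. A trimmed sketch of that framing (headers reduced; the real code also writes From/To and an X.yt-dlp.Origin header, and RFC 2047-encodes the subject):

```python
def mhtml_preamble(title: str, boundary: str, stub_html: str) -> bytes:
    # Message headers plus the first (text/html) part; every header line
    # is CRLF-terminated as RFC 2046 requires for multipart messages.
    return (
        'MIME-Version: 1.0\r\n'
        f'Subject: {title}\r\n'
        f'Content-type: multipart/related; boundary="{boundary}"; type="text/html"\r\n'
        '\r\n'
        f'--{boundary}\r\n'
        'Content-Type: text/html; charset=utf-8\r\n'
        f'Content-Length: {len(stub_html)}\r\n'
        '\r\n'
        f'{stub_html}\r\n'
    ).encode()

print(mhtml_preamble('demo slides', 'frag-boundary-0', '<html></html>').decode())
```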


@@ -2,10 +2,58 @@ import json
 import threading
 import time

+from . import get_suitable_downloader
 from .common import FileDownloader
 from .external import FFmpegFD
 from ..networking import Request
-from ..utils import DownloadError, str_or_none, try_get
+from ..utils import DownloadError, WebSocketsWrapper, str_or_none, try_get
+
+
+class NiconicoDmcFD(FileDownloader):
+    """ Downloading niconico douga from DMC with heartbeat """
+
+    def real_download(self, filename, info_dict):
+        from ..extractor.niconico import NiconicoIE
+
+        self.to_screen('[%s] Downloading from DMC' % self.FD_NAME)
+        ie = NiconicoIE(self.ydl)
+        info_dict, heartbeat_info_dict = ie._get_heartbeat_info(info_dict)
+
+        fd = get_suitable_downloader(info_dict, params=self.params)(self.ydl, self.params)
+
+        success = download_complete = False
+        timer = [None]
+        heartbeat_lock = threading.Lock()
+        heartbeat_url = heartbeat_info_dict['url']
+        heartbeat_data = heartbeat_info_dict['data'].encode()
+        heartbeat_interval = heartbeat_info_dict.get('interval', 30)
+
+        request = Request(heartbeat_url, heartbeat_data)
+
+        def heartbeat():
+            try:
+                self.ydl.urlopen(request).read()
+            except Exception:
+                self.to_screen('[%s] Heartbeat failed' % self.FD_NAME)
+
+            with heartbeat_lock:
+                if not download_complete:
+                    timer[0] = threading.Timer(heartbeat_interval, heartbeat)
+                    timer[0].start()
+
+        heartbeat_info_dict['ping']()
+        self.to_screen('[%s] Heartbeat with %d second interval ...' % (self.FD_NAME, heartbeat_interval))
+        try:
+            heartbeat()
+            if type(fd).__name__ == 'HlsFD':
+                info_dict.update(ie._extract_m3u8_formats(info_dict['url'], info_dict['id'])[0])
+            success = fd.real_download(filename, info_dict)
+        finally:
+            if heartbeat_lock:
+                with heartbeat_lock:
+                    timer[0].cancel()
+                    download_complete = True
+        return success
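The heartbeat above re-arms a one-shot threading.Timer after every ping and uses a lock so cancellation cannot race with re-arming. A generic, self-contained sketch of the same pattern (names are illustrative):

```python
import threading
import time

def start_heartbeat(send_ping, interval=30.0):
    """Fire send_ping() now and then every `interval` seconds; returns a stop() callable."""
    lock = threading.Lock()
    state = {'stopped': False, 'timer': None}

    def beat():
        try:
            send_ping()
        finally:
            with lock:
                if not state['stopped']:           # re-arm unless stopped
                    state['timer'] = threading.Timer(interval, beat)
                    state['timer'].start()

    def stop():
        with lock:                                 # lock prevents a cancel/re-arm race
            state['stopped'] = True
            if state['timer'] is not None:
                state['timer'].cancel()

    beat()
    return stop

stop = start_heartbeat(lambda: print('ping'), interval=0.1)
time.sleep(0.35)
stop()
```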
 class NiconicoLiveFD(FileDownloader):
@@ -16,6 +64,7 @@ class NiconicoLiveFD(FileDownloader):
         ws_url = info_dict['url']
         ws_extractor = info_dict['ws']
         ws_origin_host = info_dict['origin']
+        cookies = info_dict.get('cookies')
         live_quality = info_dict.get('live_quality', 'high')
         live_latency = info_dict.get('live_latency', 'high')
         dl = FFmpegFD(self.ydl, self.params or {})
@@ -27,7 +76,12 @@ class NiconicoLiveFD(FileDownloader):

         def communicate_ws(reconnect):
             if reconnect:
-                ws = self.ydl.urlopen(Request(ws_url, headers={'Origin': f'https://{ws_origin_host}'}))
+                ws = WebSocketsWrapper(ws_url, {
+                    'Cookies': str_or_none(cookies) or '',
+                    'Origin': f'https://{ws_origin_host}',
+                    'Accept': '*/*',
+                    'User-Agent': self.params['http_headers']['User-Agent'],
+                })
                 if self.ydl.params.get('verbose', False):
                     self.to_screen('[debug] Sending startWatching request')
                 ws.send(json.dumps({
@@ -37,15 +91,14 @@ class NiconicoLiveFD(FileDownloader):
                         'quality': live_quality,
                         'protocol': 'hls+fmp4',
                         'latency': live_latency,
-                        'accessRightMethod': 'single_cookie',
-                        'chasePlay': False,
+                        'chasePlay': False
                     },
                     'room': {
                         'protocol': 'webSocket',
-                        'commentable': True,
+                        'commentable': True
                     },
                     'reconnect': True,
-                },
+                }
                 }))
             else:
                 ws = ws_extractor
@@ -71,7 +124,7 @@ class NiconicoLiveFD(FileDownloader):
             elif self.ydl.params.get('verbose', False):
                 if len(recv) > 100:
                     recv = recv[:100] + '...'
-                self.to_screen(f'[debug] Server said: {recv}')
+                self.to_screen('[debug] Server said: %s' % recv)

         def ws_main():
             reconnect = False
@@ -81,7 +134,7 @@ class NiconicoLiveFD(FileDownloader):
                     if ret is True:
                         return
                 except BaseException as e:
-                    self.to_screen('[{}] {}: Connection error occured, reconnecting after 10 seconds: {}'.format('niconico:live', video_id, str_or_none(e)))
+                    self.to_screen('[%s] %s: Connection error occured, reconnecting after 10 seconds: %s' % ('niconico:live', video_id, str_or_none(e)))
                     time.sleep(10)
                     continue
                 finally:


@@ -8,6 +8,7 @@ from ..utils import (
     Popen,
     check_executable,
     encodeArgument,
+    encodeFilename,
     get_exe_version,
 )
@@ -178,15 +179,15 @@ class RtmpFD(FileDownloader):
                 return False

             while retval in (RD_INCOMPLETE, RD_FAILED) and not test and not live:
-                prevsize = os.path.getsize(tmpfilename)
-                self.to_screen(f'[rtmpdump] Downloaded {prevsize} bytes')
+                prevsize = os.path.getsize(encodeFilename(tmpfilename))
+                self.to_screen('[rtmpdump] Downloaded %s bytes' % prevsize)
                 time.sleep(5.0)  # This seems to be needed
-                args = [*basic_args, '--resume']
+                args = basic_args + ['--resume']
                 if retval == RD_FAILED:
                     args += ['--skip', '1']
                 args = [encodeArgument(a) for a in args]
                 retval = run_rtmpdump(args)
-                cursize = os.path.getsize(tmpfilename)
+                cursize = os.path.getsize(encodeFilename(tmpfilename))
                 if prevsize == cursize and retval == RD_FAILED:
                     break
                 # Some rtmp streams seem abort after ~ 99.8%. Don't complain for those
@@ -195,8 +196,8 @@ class RtmpFD(FileDownloader):
                     retval = RD_SUCCESS
                     break
             if retval == RD_SUCCESS or (test and retval == RD_INCOMPLETE):
-                fsize = os.path.getsize(tmpfilename)
-                self.to_screen(f'[rtmpdump] Downloaded {fsize} bytes')
+                fsize = os.path.getsize(encodeFilename(tmpfilename))
+                self.to_screen('[rtmpdump] Downloaded %s bytes' % fsize)
                 self.try_rename(tmpfilename, filename)
                 self._hook_progress({
                     'downloaded_bytes': fsize,


@@ -2,7 +2,7 @@ import os
 import subprocess

 from .common import FileDownloader
-from ..utils import check_executable
+from ..utils import check_executable, encodeFilename


 class RtspFD(FileDownloader):
@@ -26,7 +26,7 @@ class RtspFD(FileDownloader):
         retval = subprocess.call(args)
         if retval == 0:
-            fsize = os.path.getsize(tmpfilename)
+            fsize = os.path.getsize(encodeFilename(tmpfilename))
             self.to_screen(f'\r[{args[0]}] {fsize} bytes')
             self.try_rename(tmpfilename, filename)
             self._hook_progress({


@@ -18,7 +18,7 @@ class YoutubeLiveChatFD(FragmentFD):
     def real_download(self, filename, info_dict):
         video_id = info_dict['video_id']
-        self.to_screen(f'[{self.FD_NAME}] Downloading live chat')
+        self.to_screen('[%s] Downloading live chat' % self.FD_NAME)
         if not self.params.get('skip_download') and info_dict['protocol'] == 'youtube_live_chat':
             self.report_warning('Live chat download runs until the livestream ends. '
                                 'If you wish to download the video simultaneously, run a separate yt-dlp instance')
@@ -123,8 +123,8 @@ class YoutubeLiveChatFD(FragmentFD):
                     data,
                     lambda x: x['continuationContents']['liveChatContinuation'], dict) or {}

-                func = ((info_dict['protocol'] == 'youtube_live_chat' and parse_actions_live)
-                        or (frag_index == 1 and try_refresh_replay_beginning)
+                func = (info_dict['protocol'] == 'youtube_live_chat' and parse_actions_live
+                        or frag_index == 1 and try_refresh_replay_beginning
                         or parse_actions_replay)
                 return (True, *func(live_chat_continuation))
             except HTTPError as err:


@@ -1,25 +1,16 @@
 from ..compat.compat_utils import passthrough_module
-from ..globals import extractors as _extractors_context
-from ..globals import plugin_ies as _plugin_ies_context
-from ..plugins import PluginSpec, register_plugin_spec

 passthrough_module(__name__, '.extractors')
 del passthrough_module

-register_plugin_spec(PluginSpec(
-    module_name='extractor',
-    suffix='IE',
-    destination=_extractors_context,
-    plugin_destination=_plugin_ies_context,
-))


 def gen_extractor_classes():
     """ Return a list of supported extractors.
     The order does matter; the first extractor matched is the one handling the URL.
     """
-    import_extractors()
-    return list(_extractors_context.value.values())
+    from .extractors import _ALL_CLASSES
+    return _ALL_CLASSES


 def gen_extractors():
@@ -46,9 +37,6 @@ def list_extractors(age_limit=None):

 def get_info_extractor(ie_name):
     """Returns the info extractor class with the given ie_name"""
-    import_extractors()
-    return _extractors_context.value[f'{ie_name}IE']
-
-
-def import_extractors():
-    from . import extractors  # noqa: F401
+    from . import extractors
+    return getattr(extractors, f'{ie_name}IE')

File diff suppressed because it is too large


@@ -4,18 +4,18 @@ import re
 import time

 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
-    ExtractorError,
     dict_get,
-    int_or_none,
+    ExtractorError,
     js_to_json,
+    int_or_none,
     parse_iso8601,
     str_or_none,
     traverse_obj,
     try_get,
     unescapeHTML,
     update_url_query,
-    url_or_none,
 )
@@ -66,7 +66,7 @@ class ABCIE(InfoExtractor):
             'ext': 'mp4',
             'title': 'WWI Centenary',
             'description': 'md5:c2379ec0ca84072e86b446e536954546',
-        },
+        }
     }, {
         'url': 'https://www.abc.net.au/news/programs/the-world/2020-06-10/black-lives-matter-protests-spawn-support-for/12342074',
         'info_dict': {
@@ -74,7 +74,7 @@ class ABCIE(InfoExtractor):
             'ext': 'mp4',
             'title': 'Black Lives Matter protests spawn support for Papuans in Indonesia',
             'description': 'md5:2961a17dc53abc558589ccd0fb8edd6f',
-        },
+        }
     }, {
         'url': 'https://www.abc.net.au/btn/newsbreak/btn-newsbreak-20200814/12560476',
         'info_dict': {
@@ -85,7 +85,7 @@ class ABCIE(InfoExtractor):
             'upload_date': '20200813',
             'uploader': 'Behind the News',
             'uploader_id': 'behindthenews',
-        },
+        }
     }, {
         'url': 'https://www.abc.net.au/news/2023-06-25/wagner-boss-orders-troops-back-to-bases-to-avoid-bloodshed/102520540',
         'info_dict': {
@@ -94,7 +94,7 @@ class ABCIE(InfoExtractor):
             'ext': 'mp4',
             'description': 'Wagner troops leave Rostov-on-Don and\xa0Yevgeny Prigozhin will move to Belarus under a deal brokered by Belarusian President Alexander Lukashenko to end the mutiny.',
             'thumbnail': 'https://live-production.wcms.abc-cdn.net.au/0c170f5b57f0105c432f366c0e8e267b?impolicy=wcms_crop_resize&cropH=2813&cropW=5000&xPos=0&yPos=249&width=862&height=485',
-        },
+        }
     }]

     def _real_extract(self, url):
@@ -125,7 +125,7 @@ class ABCIE(InfoExtractor):
         if mobj is None:
             expired = self._html_search_regex(r'(?s)class="expired-(?:video|audio)".+?<span>(.+?)</span>', webpage, 'expired', None)
             if expired:
-                raise ExtractorError(f'{self.IE_NAME} said: {expired}', expected=True)
+                raise ExtractorError('%s said: %s' % (self.IE_NAME, expired), expected=True)
             raise ExtractorError('Unable to extract video urls')

         urls_info = self._parse_json(
@@ -163,7 +163,7 @@ class ABCIE(InfoExtractor):
                 'height': height,
                 'tbr': bitrate,
                 'filesize': int_or_none(url_info.get('filesize')),
-                'format_id': format_id,
+                'format_id': format_id
             })

         return {
@@ -180,100 +180,20 @@ class ABCIViewIE(InfoExtractor):
     _VALID_URL = r'https?://iview\.abc\.net\.au/(?:[^/]+/)*video/(?P<id>[^/?#]+)'
     _GEO_COUNTRIES = ['AU']

-    # ABC iview programs are normally available for 14 days only.
     _TESTS = [{
-        'url': 'https://iview.abc.net.au/show/utopia/series/1/video/CO1211V001S00',
-        'md5': '52a942bfd7a0b79a6bfe9b4ce6c9d0ed',
-        'info_dict': {
-            'id': 'CO1211V001S00',
-            'ext': 'mp4',
-            'title': 'Series 1 Ep 1 Wood For The Trees',
-            'series': 'Utopia',
-            'description': 'md5:0cfb2c183c1b952d1548fd65c8a95c00',
-            'upload_date': '20230726',
-            'uploader_id': 'abc1',
-            'series_id': 'CO1211V',
-            'episode_id': 'CO1211V001S00',
-            'season_number': 1,
-            'season': 'Season 1',
-            'episode_number': 1,
-            'episode': 'Wood For The Trees',
-            'thumbnail': 'https://cdn.iview.abc.net.au/thumbs/i/co/CO1211V001S00_5ad8353f4df09_1280.jpg',
-            'timestamp': 1690403700,
-        },
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        'note': 'No episode name',
         'url': 'https://iview.abc.net.au/show/gruen/series/11/video/LE1927H001S00',
         'md5': '67715ce3c78426b11ba167d875ac6abf',
         'info_dict': {
             'id': 'LE1927H001S00',
             'ext': 'mp4',
-            'title': 'Series 11 Ep 1',
-            'series': 'Gruen',
+            'title': "Series 11 Ep 1",
+            'series': "Gruen",
             'description': 'md5:52cc744ad35045baf6aded2ce7287f67',
             'upload_date': '20190925',
             'uploader_id': 'abc1',
-            'series_id': 'LE1927H',
-            'episode_id': 'LE1927H001S00',
-            'season_number': 11,
-            'season': 'Season 11',
-            'episode_number': 1,
-            'episode': 'Episode 1',
-            'thumbnail': 'https://cdn.iview.abc.net.au/thumbs/i/le/LE1927H001S00_5d954fbd79e25_1280.jpg',
             'timestamp': 1569445289,
         },
-        'expected_warnings': ['Ignoring subtitle tracks found in the HLS manifest'],
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        'note': 'No episode number',
-        'url': 'https://iview.abc.net.au/show/four-corners/series/2022/video/NC2203H039S00',
-        'md5': '77cb7d8434440e3b28fbebe331c2456a',
-        'info_dict': {
-            'id': 'NC2203H039S00',
-            'ext': 'mp4',
-            'title': 'Series 2022 Locking Up Kids',
-            'series': 'Four Corners',
-            'description': 'md5:54829ca108846d1a70e1fcce2853e720',
-            'upload_date': '20221114',
-            'uploader_id': 'abc1',
-            'series_id': 'NC2203H',
-            'episode_id': 'NC2203H039S00',
-            'season_number': 2022,
-            'season': 'Season 2022',
-            'episode': 'Locking Up Kids',
-            'thumbnail': 'https://cdn.iview.abc.net.au/thumbs/i/nc/NC2203H039S00_636d8a0944a22_1920.jpg',
-            'timestamp': 1668460497,
-        },
-        'expected_warnings': ['Ignoring subtitle tracks found in the HLS manifest'],
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        'note': 'No episode name or number',
-        'url': 'https://iview.abc.net.au/show/landline/series/2021/video/RF2004Q043S00',
-        'md5': '2e17dec06b13cc81dc119d2565289396',
-        'info_dict': {
-            'id': 'RF2004Q043S00',
-            'ext': 'mp4',
-            'title': 'Series 2021',
-            'series': 'Landline',
-            'description': 'md5:c9f30d9c0c914a7fd23842f6240be014',
-            'upload_date': '20211205',
-            'uploader_id': 'abc1',
-            'series_id': 'RF2004Q',
-            'episode_id': 'RF2004Q043S00',
-            'season_number': 2021,
-            'season': 'Season 2021',
-            'thumbnail': 'https://cdn.iview.abc.net.au/thumbs/i/rf/RF2004Q043S00_61a950639dbc0_1920.jpg',
-            'timestamp': 1638710705,
-        },
-        'expected_warnings': ['Ignoring subtitle tracks found in the HLS manifest'],
         'params': {
             'skip_download': True,
         },
@@ -287,12 +207,13 @@ class ABCIViewIE(InfoExtractor):
         stream = next(s for s in video_params['playlist'] if s.get('type') in ('program', 'livestream'))

         house_number = video_params.get('episodeHouseNumber') or video_id
-        path = f'/auth/hls/sign?ts={int(time.time())}&hn={house_number}&d=android-tablet'
+        path = '/auth/hls/sign?ts={0}&hn={1}&d=android-tablet'.format(
+            int(time.time()), house_number)
         sig = hmac.new(
             b'android.content.res.Resources',
-            path.encode(), hashlib.sha256).hexdigest()
+            path.encode('utf-8'), hashlib.sha256).hexdigest()
         token = self._download_webpage(
-            f'http://iview.abc.net.au{path}&sig={sig}', video_id)
+            'http://iview.abc.net.au{0}&sig={1}'.format(path, sig), video_id)

         def tokenize_url(url, token):
             return update_url_query(url, {
@@ -301,7 +222,7 @@ class ABCIViewIE(InfoExtractor):
         for sd in ('1080', '720', 'sd', 'sd-low'):
             sd_url = try_get(
-                stream, lambda x: x['streams']['hls'][sd], str)
+                stream, lambda x: x['streams']['hls'][sd], compat_str)
             if not sd_url:
                 continue
             formats = self._extract_m3u8_formats(
@@ -334,8 +255,6 @@ class ABCIViewIE(InfoExtractor):
             'episode_number': int_or_none(self._search_regex(
                 r'\bEp\s+(\d+)\b', title, 'episode number', default=None)),
             'episode_id': house_number,
-            'episode': self._search_regex(
-                r'^(?:Series\s+\d+)?\s*(?:Ep\s+\d+)?\s*(.*)$', title, 'episode', default='') or None,
             'uploader_id': video_params.get('channel'),
             'formats': formats,
             'subtitles': subtitles,
@@ -356,7 +275,7 @@ class ABCIViewShowSeriesIE(InfoExtractor):
             'description': 'md5:93119346c24a7c322d446d8eece430ff',
             'series': 'Upper Middle Bogan',
             'season': 'Series 1',
-            'thumbnail': r're:^https?://cdn\.iview\.abc\.net\.au/thumbs/.*\.jpg$',
+            'thumbnail': r're:^https?://cdn\.iview\.abc\.net\.au/thumbs/.*\.jpg$'
         },
         'playlist_count': 8,
     }, {
@@ -375,39 +294,17 @@ class ABCIViewShowSeriesIE(InfoExtractor):
             'noplaylist': True,
             'skip_download': 'm3u8',
         },
-    }, {
-        # 'videoEpisodes' is a dict with `items` key
-        'url': 'https://iview.abc.net.au/show/7-30-mark-humphries-satire',
-        'info_dict': {
-            'id': '178458-0',
-            'title': 'Episodes',
-            'description': 'Satirist Mark Humphries brings his unique perspective on current political events for 7.30.',
-            'series': '7.30 Mark Humphries Satire',
-            'season': 'Episodes',
-            'thumbnail': r're:^https?://cdn\.iview\.abc\.net\.au/thumbs/.*\.jpg$',
-        },
-        'playlist_count': 15,
-        'skip': 'This program is not currently available in ABC iview',
-    }, {
-        'url': 'https://iview.abc.net.au/show/inbestigators',
-        'info_dict': {
-            'id': '175343-1',
-            'title': 'Series 1',
-            'description': 'md5:b9976935a6450e5b78ce2a940a755685',
-            'series': 'The Inbestigators',
-            'season': 'Series 1',
-            'thumbnail': r're:^https?://cdn\.iview\.abc\.net\.au/thumbs/.+\.jpg',
-        },
-        'playlist_count': 17,
     }]

     def _real_extract(self, url):
         show_id = self._match_id(url)
         webpage = self._download_webpage(url, show_id)
-        video_data = self._search_json(
-            r'window\.__INITIAL_STATE__\s*=\s*[\'"]', webpage, 'initial state', show_id,
-            transform_source=lambda x: x.encode().decode('unicode_escape'),
-            end_pattern=r'[\'"]\s*;')['route']['pageData']['_embedded']
+        webpage_data = self._search_regex(
+            r'window\.__INITIAL_STATE__\s*=\s*[\'"](.+?)[\'"]\s*;',
+            webpage, 'initial state')
+        video_data = self._parse_json(
+            unescapeHTML(webpage_data).encode('utf-8').decode('unicode_escape'), show_id)
+        video_data = video_data['route']['pageData']['_embedded']

         highlight = try_get(video_data, lambda x: x['highlightVideo']['shareUrl'])
         if not self._yes_playlist(show_id, bool(highlight), video_label='highlight video'):
@@ -416,14 +313,12 @@ class ABCIViewShowSeriesIE(InfoExtractor):
         series = video_data['selectedSeries']
         return {
             '_type': 'playlist',
-            'entries': [self.url_result(episode_url, ABCIViewIE)
-                        for episode_url in traverse_obj(series, (
-                            '_embedded', 'videoEpisodes', (None, 'items'), ..., 'shareUrl', {url_or_none}))],
+            'entries': [self.url_result(episode['shareUrl'])
+                        for episode in series['_embedded']['videoEpisodes']],
             'id': series.get('id'),
             'title': dict_get(series, ('title', 'displaySubtitle')),
             'description': series.get('description'),
             'series': dict_get(series, ('showTitle', 'displayTitle')),
             'season': dict_get(series, ('title', 'displaySubtitle')),
-            'thumbnail': traverse_obj(
-                series, 'thumbnail', ('images', lambda _, v: v['name'] == 'seriesThumbnail', 'url'), get_all=False),
+            'thumbnail': series.get('thumbnail'),
         }


@@ -58,7 +58,7 @@ class AbcNewsVideoIE(AMPIE):
         display_id = mobj.group('display_id')
         video_id = mobj.group('id')
         info_dict = self._extract_feed_info(
-            f'http://abcnews.go.com/video/itemfeed?id={video_id}')
+            'http://abcnews.go.com/video/itemfeed?id=%s' % video_id)
         info_dict.update({
             'id': video_id,
             'display_id': display_id,


@@ -1,4 +1,5 @@
 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
     dict_get,
     int_or_none,
@@ -56,11 +57,11 @@ class ABCOTVSIE(InfoExtractor):
         data = self._download_json(
             'https://api.abcotvs.com/v2/content', display_id, query={
                 'id': video_id,
-                'key': f'otv.web.{station}.story',
+                'key': 'otv.web.%s.story' % station,
                 'station': station,
             })['data']
         video = try_get(data, lambda x: x['featuredMedia']['video'], dict) or data
-        video_id = str(dict_get(video, ('id', 'publishedKey'), video_id))
+        video_id = compat_str(dict_get(video, ('id', 'publishedKey'), video_id))
         title = video.get('title') or video['linkText']

         formats = []


@@ -6,54 +6,53 @@ import hmac
 import io
 import json
 import re
+import struct
 import time
 import urllib.parse
+import urllib.request
+import urllib.response
 import uuid

 from .common import InfoExtractor
 from ..aes import aes_ecb_decrypt
-from ..networking import RequestHandler, Response
-from ..networking.exceptions import TransportError
 from ..utils import (
     ExtractorError,
-    OnDemandPagedList,
+    bytes_to_intlist,
     decode_base_n,
     int_or_none,
+    intlist_to_bytes,
+    OnDemandPagedList,
     time_seconds,
     traverse_obj,
-    update_url,
     update_url_query,
 )


-class AbemaLicenseRH(RequestHandler):
-    _SUPPORTED_URL_SCHEMES = ('abematv-license',)
-    _SUPPORTED_PROXY_SCHEMES = None
-    _SUPPORTED_FEATURES = None
-    RH_NAME = 'abematv_license'
+def add_opener(ydl, handler):  # FIXME: Create proper API in .networking
+    """Add a handler for opening URLs, like _download_webpage"""
+    # https://github.com/python/cpython/blob/main/Lib/urllib/request.py#L426
+    # https://github.com/python/cpython/blob/main/Lib/urllib/request.py#L605
+    rh = ydl._request_director.handlers['Urllib']
+    if 'abematv-license' in rh._SUPPORTED_URL_SCHEMES:
+        return
+    opener = rh._get_instance(cookiejar=ydl.cookiejar, proxies=ydl.proxies)
+    assert isinstance(opener, urllib.request.OpenerDirector)
+    opener.add_handler(handler)
+    rh._SUPPORTED_URL_SCHEMES = (*rh._SUPPORTED_URL_SCHEMES, 'abematv-license')

-    _STRTABLE = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
-    _HKEY = b'3AF0298C219469522A313570E8583005A642E73EDD58E3EA2FB7339D3DF1597E'

-    def __init__(self, *, ie: 'AbemaTVIE', **kwargs):
-        super().__init__(**kwargs)
+class AbemaLicenseHandler(urllib.request.BaseHandler):
+    handler_order = 499
+    STRTABLE = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
+    HKEY = b'3AF0298C219469522A313570E8583005A642E73EDD58E3EA2FB7339D3DF1597E'
+
+    def __init__(self, ie: 'AbemaTVIE'):
+        # the protocol that this should really handle is 'abematv-license://'
+        # abematv_license_open is just a placeholder for development purposes
+        # ref. https://github.com/python/cpython/blob/f4c03484da59049eb62a9bf7777b963e2267d187/Lib/urllib/request.py#L510
+        setattr(self, 'abematv-license_open', getattr(self, 'abematv_license_open'))
         self.ie = ie

-    def _send(self, request):
-        url = request.url
-        ticket = urllib.parse.urlparse(url).netloc
-
-        try:
-            response_data = self._get_videokey_from_ticket(ticket)
-        except ExtractorError as e:
-            raise TransportError(cause=e.cause) from e
-        except (IndexError, KeyError, TypeError) as e:
-            raise TransportError(cause=repr(e)) from e
-
-        return Response(
-            io.BytesIO(response_data), url,
-            headers={'Content-Length': str(len(response_data))})
-
     def _get_videokey_from_ticket(self, ticket):
         to_show = self.ie.get_param('verbose', False)
         media_token = self.ie._get_media_token(to_show=to_show)
@@ -63,27 +62,33 @@ class AbemaLicenseRH(RequestHandler):
             query={'t': media_token},
             data=json.dumps({
                 'kv': 'a',
-                'lt': ticket,
-            }).encode(),
+                'lt': ticket
+            }).encode('utf-8'),
             headers={
                 'Content-Type': 'application/json',
             })

-        res = decode_base_n(license_response['k'], table=self._STRTABLE)
-        encvideokey = list(res.to_bytes(16, 'big'))
+        res = decode_base_n(license_response['k'], table=self.STRTABLE)
+        encvideokey = bytes_to_intlist(struct.pack('>QQ', res >> 64, res & 0xffffffffffffffff))

         h = hmac.new(
-            binascii.unhexlify(self._HKEY),
-            (license_response['cid'] + self.ie._DEVICE_ID).encode(),
+            binascii.unhexlify(self.HKEY),
+            (license_response['cid'] + self.ie._DEVICE_ID).encode('utf-8'),
             digestmod=hashlib.sha256)
-        enckey = list(h.digest())
+        enckey = bytes_to_intlist(h.digest())

-        return bytes(aes_ecb_decrypt(encvideokey, enckey))
+        return intlist_to_bytes(aes_ecb_decrypt(encvideokey, enckey))
+
+    def abematv_license_open(self, url):
+        url = url.get_full_url() if isinstance(url, urllib.request.Request) else url
+        ticket = urllib.parse.urlparse(url).netloc
+        response_data = self._get_videokey_from_ticket(ticket)
+        return urllib.response.addinfourl(io.BytesIO(response_data), headers={
+            'Content-Length': str(len(response_data)),
+        }, url=url, code=200)
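Both variants of the license handler derive the HLS video key the same way: decode the base-58-style `k` field, build an AES key as HMAC-SHA256 over cid + device id under a fixed hex key, then decrypt one ECB block. A condensed sketch of that pipeline, assuming pycryptodome stands in for yt-dlp's bundled pure-Python AES:

```python
import binascii
import hashlib
import hmac

from Crypto.Cipher import AES  # assumption: pycryptodome replaces aes_ecb_decrypt

STRTABLE = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
HKEY = b'3AF0298C219469522A313570E8583005A642E73EDD58E3EA2FB7339D3DF1597E'

def decode_base_n(string, table):
    # positional decoding in the 58-character alphabet above
    result = 0
    for char in string:
        result = result * len(table) + table.index(char)
    return result

def derive_videokey(license_k: str, cid: str, device_id: str) -> bytes:
    # 1. the license server's `k` decodes to a 128-bit encrypted key
    encvideokey = decode_base_n(license_k, STRTABLE).to_bytes(16, 'big')
    # 2. the AES key is HMAC-SHA256(cid + device_id) under the fixed HKEY
    enckey = hmac.new(binascii.unhexlify(HKEY), (cid + device_id).encode(),
                      hashlib.sha256).digest()
    # 3. one AES-256-ECB block decryption yields the 16-byte video key
    return AES.new(enckey, AES.MODE_ECB).decrypt(encvideokey)
```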
 class AbemaTVBaseIE(InfoExtractor):
-    _NETRC_MACHINE = 'abematv'
-
     _USERTOKEN = None
     _DEVICE_ID = None
     _MEDIATOKEN = None
@@ -92,11 +97,11 @@ class AbemaTVBaseIE(InfoExtractor):
     @classmethod
     def _generate_aks(cls, deviceid):
-        deviceid = deviceid.encode()
+        deviceid = deviceid.encode('utf-8')
         # add 1 hour and then drop minute and secs
         ts_1hour = int((time_seconds() // 3600 + 1) * 3600)
         time_struct = time.gmtime(ts_1hour)
-        ts_1hour_str = str(ts_1hour).encode()
+        ts_1hour_str = str(ts_1hour).encode('utf-8')

         tmp = None
@@ -108,7 +113,7 @@ class AbemaTVBaseIE(InfoExtractor):
         def mix_tmp(count):
             nonlocal tmp
-            for _ in range(count):
+            for i in range(count):
                 mix_once(tmp)

         def mix_twist(nonce):
@@ -128,15 +133,11 @@ class AbemaTVBaseIE(InfoExtractor):
         if self._USERTOKEN:
             return self._USERTOKEN

-        self._downloader._request_director.add_handler(AbemaLicenseRH(ie=self, logger=None))
-
         username, _ = self._get_login_info()
-        auth_cache = username and self.cache.load(self._NETRC_MACHINE, username, min_ver='2024.01.19')
-        AbemaTVBaseIE._USERTOKEN = auth_cache and auth_cache.get('usertoken')
+        AbemaTVBaseIE._USERTOKEN = username and self.cache.load(self._NETRC_MACHINE, username)
         if AbemaTVBaseIE._USERTOKEN:
             # try authentication with locally stored token
             try:
-                AbemaTVBaseIE._DEVICE_ID = auth_cache.get('device_id')
                 self._get_media_token(True)
                 return
             except ExtractorError as e:
@@ -149,12 +150,13 @@ class AbemaTVBaseIE(InfoExtractor):
             data=json.dumps({
                 'deviceId': self._DEVICE_ID,
                 'applicationKeySecret': aks,
-            }).encode(),
+            }).encode('utf-8'),
             headers={
                 'Content-Type': 'application/json',
             })
         AbemaTVBaseIE._USERTOKEN = user_data['token']

+        add_opener(self._downloader, AbemaLicenseHandler(self))
         return self._USERTOKEN

     def _get_media_token(self, invalidate=False, to_show=True):
@@ -169,44 +171,13 @@ class AbemaTVBaseIE(InfoExtractor):
                 'osLang': 'ja_JP',
                 'osTimezone': 'Asia/Tokyo',
                 'appId': 'tv.abema',
-                'appVersion': '3.27.1',
+                'appVersion': '3.27.1'
             }, headers={
                 'Authorization': f'bearer {self._get_device_token()}',
             })['token']

         return self._MEDIATOKEN

-    def _perform_login(self, username, password):
-        self._get_device_token()
-        if self.cache.load(self._NETRC_MACHINE, username, min_ver='2024.01.19') and self._get_media_token():
-            self.write_debug('Skipping logging in')
-            return
-
-        if '@' in username:  # don't strictly check if it's email address or not
-            ep, method = 'user/email', 'email'
-        else:
-            ep, method = 'oneTimePassword', 'userId'
-
-        login_response = self._download_json(
-            f'https://api.abema.io/v1/auth/{ep}', None, note='Logging in',
-            data=json.dumps({
-                method: username,
-                'password': password,
-            }).encode(), headers={
-                'Authorization': f'bearer {self._get_device_token()}',
-                'Origin': 'https://abema.tv',
-                'Referer': 'https://abema.tv/',
-                'Content-Type': 'application/json',
-            })
-
-        AbemaTVBaseIE._USERTOKEN = login_response['token']
-        self._get_media_token(True)
-        auth_cache = {
-            'device_id': AbemaTVBaseIE._DEVICE_ID,
-            'usertoken': AbemaTVBaseIE._USERTOKEN,
-        }
-        self.cache.store(self._NETRC_MACHINE, username, auth_cache)
-
     def _call_api(self, endpoint, video_id, query=None, note='Downloading JSON metadata'):
         return self._download_json(
             f'https://api.abema.io/{endpoint}', video_id, query=query or {},
@@ -230,14 +201,14 @@ class AbemaTVBaseIE(InfoExtractor):
 class AbemaTVIE(AbemaTVBaseIE):
     _VALID_URL = r'https?://abema\.tv/(?P<type>now-on-air|video/episode|channels/.+?/slots)/(?P<id>[^?/]+)'
+    _NETRC_MACHINE = 'abematv'
     _TESTS = [{
         'url': 'https://abema.tv/video/episode/194-25_s2_p1',
         'info_dict': {
             'id': '194-25_s2_p1',
             'title': '第1話 「チーズケーキ」 「モーニング再び」',
             'series': '異世界食堂2',
-            'season': 'シーズン2',
-            'season_number': 2,
+            'series_number': 2,
             'episode': '第1話 「チーズケーキ」 「モーニング再び」',
             'episode_number': 1,
         },
@@ -249,7 +220,7 @@ class AbemaTVIE(AbemaTVBaseIE):
             'title': 'ゆるキャン△ SEASON 全話一挙【無料ビデオ72時間】',
             'series': 'ゆるキャン△ SEASON',
             'episode': 'ゆるキャン△ SEASON 全話一挙【無料ビデオ72時間】',
-            'season_number': 2,
+            'series_number': 2,
             'episode_number': 1,
             'description': 'md5:9c5a3172ae763278f9303922f0ea5b17',
         },
@@ -278,6 +249,33 @@ class AbemaTVIE(AbemaTVBaseIE):
     }]
     _TIMETABLE = None

+    def _perform_login(self, username, password):
+        self._get_device_token()
+        if self.cache.load(self._NETRC_MACHINE, username) and self._get_media_token():
+            self.write_debug('Skipping logging in')
+            return
+
+        if '@' in username:  # don't strictly check if it's email address or not
+            ep, method = 'user/email', 'email'
+        else:
+            ep, method = 'oneTimePassword', 'userId'
+
+        login_response = self._download_json(
+            f'https://api.abema.io/v1/auth/{ep}', None, note='Logging in',
+            data=json.dumps({
+                method: username,
+                'password': password
+            }).encode('utf-8'), headers={
+                'Authorization': f'bearer {self._get_device_token()}',
+                'Origin': 'https://abema.tv',
+                'Referer': 'https://abema.tv/',
+                'Content-Type': 'application/json',
+            })
+
+        AbemaTVBaseIE._USERTOKEN = login_response['token']
+        self._get_media_token(True)
+        self.cache.store(self._NETRC_MACHINE, username, AbemaTVBaseIE._USERTOKEN)
     def _real_extract(self, url):
         # starting download using infojson from this extractor is undefined behavior,
         # and never be fixed in the future; you must trigger downloads by directly specifying URL.
@@ -333,7 +331,7 @@ class AbemaTVIE(AbemaTVBaseIE):
         description = self._html_search_regex(
             (r'<p\s+class="com-video-EpisodeDetailsBlock__content"><span\s+class=".+?">(.+?)</span></p><div',
-             r'<span\s+class=".+?SlotSummary.+?">(.+?)</span></div><div'),
+             r'<span\s+class=".+?SlotSummary.+?">(.+?)</span></div><div',),
             webpage, 'description', default=None, group=1)
         if not description:
             og_desc = self._html_search_meta(
@@ -346,18 +344,17 @@ class AbemaTVIE(AbemaTVBaseIE):
                 )?
             ''', r'\1', og_desc)

-        # canonical URL may contain season and episode number
+        # canonical URL may contain series and episode number
         mobj = re.search(r's(\d+)_p(\d+)$', canonical_url)
         if mobj:
             seri = int_or_none(mobj.group(1), default=float('inf'))
             epis = int_or_none(mobj.group(2), default=float('inf'))
-            info['season_number'] = seri if seri < 100 else None
+            info['series_number'] = seri if seri < 100 else None
             # some anime like Detective Conan (though not available in AbemaTV)
             # has more than 1000 episodes (1026 as of 2021/11/15)
             info['episode_number'] = epis if epis < 2000 else None

         is_live, m3u8_url = False, None
-        availability = 'public'
         if video_type == 'now-on-air':
             is_live = True
             channel_url = 'https://api.abema.io/v1/channels'
@@ -375,13 +372,13 @@ class AbemaTVIE(AbemaTVBaseIE):
                 f'https://api.abema.io/v1/video/programs/{video_id}', video_id,
                 note='Checking playability',
                 headers=headers)
-            if not traverse_obj(api_response, ('label', 'free', {bool})):
+            ondemand_types = traverse_obj(api_response, ('terms', ..., 'onDemandType'))
+            if 3 not in ondemand_types:
                 # cannot acquire decryption key for these streams
                 self.report_warning('This is a premium-only stream')
-                availability = 'premium_only'
             info.update(traverse_obj(api_response, {
                 'series': ('series', 'title'),
-                'season': ('season', 'name'),
+                'season': ('season', 'title'),
                 'season_number': ('season', 'sequence'),
                 'episode_number': ('episode', 'number'),
             }))
@@ -398,7 +395,6 @@ class AbemaTVIE(AbemaTVBaseIE):
                 headers=headers)
             if not traverse_obj(api_response, ('slot', 'flags', 'timeshiftFree'), default=False):
                 self.report_warning('This is a premium-only stream')
-                availability = 'premium_only'

             m3u8_url = f'https://vod-abematv.akamaized.net/slot/{video_id}/playlist.m3u8'
         else:
@@ -416,25 +412,19 @@ class AbemaTVIE(AbemaTVBaseIE):
             'description': description,
             'formats': formats,
             'is_live': is_live,
-            'availability': availability,
         })

-        if thumbnail := update_url(self._og_search_thumbnail(webpage, default=''), query=None):
-            info['thumbnails'] = [{'url': thumbnail}]
-
         return info
class AbemaTVTitleIE(AbemaTVBaseIE): class AbemaTVTitleIE(AbemaTVBaseIE):
_VALID_URL = r'https?://abema\.tv/video/title/(?P<id>[^?/#]+)/?(?:\?(?:[^#]+&)?s=(?P<season>[^&#]+))?' _VALID_URL = r'https?://abema\.tv/video/title/(?P<id>[^?/]+)'
_PAGE_SIZE = 25 _PAGE_SIZE = 25
_TESTS = [{ _TESTS = [{
'url': 'https://abema.tv/video/title/90-1887', 'url': 'https://abema.tv/video/title/90-1597',
'info_dict': { 'info_dict': {
'id': '90-1887', 'id': '90-1597',
'title': 'シャッフルアイランド', 'title': 'シャッフルアイランド',
'description': 'md5:61b2425308f41a5282a926edda66f178',
}, },
'playlist_mincount': 2, 'playlist_mincount': 2,
}, { }, {
@@ -442,54 +432,41 @@ class AbemaTVTitleIE(AbemaTVBaseIE):
'info_dict': { 'info_dict': {
'id': '193-132', 'id': '193-132',
'title': '真心が届く~僕とスターのオフィス・ラブ!?~', 'title': '真心が届く~僕とスターのオフィス・ラブ!?~',
'description': 'md5:9b59493d1f3a792bafbc7319258e7af8',
}, },
'playlist_mincount': 16, 'playlist_mincount': 16,
}, { }, {
'url': 'https://abema.tv/video/title/25-1nzan-whrxe', 'url': 'https://abema.tv/video/title/25-102',
'info_dict': { 'info_dict': {
'id': '25-1nzan-whrxe', 'id': '25-102',
'title': 'ソードアート・オンライン', 'title': 'ソードアート・オンライン アリシゼーション',
'description': 'md5:c094904052322e6978495532bdbf06e6',
}, },
'playlist_mincount': 25, 'playlist_mincount': 24,
}, {
'url': 'https://abema.tv/video/title/26-2mzbynr-cph?s=26-2mzbynr-cph_s40',
'info_dict': {
'title': '〈物語〉シリーズ',
'id': '26-2mzbynr-cph',
'description': 'md5:e67873de1c88f360af1f0a4b84847a52',
},
'playlist_count': 59,
}] }]
def _fetch_page(self, playlist_id, series_version, season_id, page): def _fetch_page(self, playlist_id, series_version, page):
query = {
'seriesVersion': series_version,
'offset': str(page * self._PAGE_SIZE),
'order': 'seq',
'limit': str(self._PAGE_SIZE),
}
if season_id:
query['seasonId'] = season_id
programs = self._call_api( programs = self._call_api(
f'v1/video/series/{playlist_id}/programs', playlist_id, f'v1/video/series/{playlist_id}/programs', playlist_id,
note=f'Downloading page {page + 1}', note=f'Downloading page {page + 1}',
query=query) query={
'seriesVersion': series_version,
'offset': str(page * self._PAGE_SIZE),
'order': 'seq',
'limit': str(self._PAGE_SIZE),
})
yield from ( yield from (
self.url_result(f'https://abema.tv/video/episode/{x}') self.url_result(f'https://abema.tv/video/episode/{x}')
for x in traverse_obj(programs, ('programs', ..., 'id'))) for x in traverse_obj(programs, ('programs', ..., 'id')))
def _entries(self, playlist_id, series_version, season_id): def _entries(self, playlist_id, series_version):
return OnDemandPagedList( return OnDemandPagedList(
functools.partial(self._fetch_page, playlist_id, series_version, season_id), functools.partial(self._fetch_page, playlist_id, series_version),
self._PAGE_SIZE) self._PAGE_SIZE)
def _real_extract(self, url): def _real_extract(self, url):
playlist_id, season_id = self._match_valid_url(url).group('id', 'season') playlist_id = self._match_id(url)
series_info = self._call_api(f'v1/video/series/{playlist_id}', playlist_id) series_info = self._call_api(f'v1/video/series/{playlist_id}', playlist_id)
return self.playlist_result( return self.playlist_result(
self._entries(playlist_id, series_info['version'], season_id), playlist_id=playlist_id, self._entries(playlist_id, series_info['version']), playlist_id=playlist_id,
playlist_title=series_info.get('title'), playlist_title=series_info.get('title'),
playlist_description=series_info.get('content')) playlist_description=series_info.get('content'))
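
Both sides of this last hunk page through the series API with `OnDemandPagedList`, which only materializes a page when the downloader actually asks for it. A minimal sketch of that pattern, assuming only that `OnDemandPagedList(pagefunc, page_size)` calls `pagefunc(page_number)` lazily; `fetch_page` is a hypothetical stand-in for the API request:

```
# Sketch only: fetch_page stands in for the real paged API call.
import functools

from yt_dlp.utils import OnDemandPagedList

PAGE_SIZE = 25

def fetch_page(page_size, page):
    # The extractor would request offset=page * page_size here and yield
    # one playlist entry per program returned by the API.
    start = page * page_size
    yield from range(start, start + page_size)

pages = OnDemandPagedList(functools.partial(fetch_page, PAGE_SIZE), PAGE_SIZE)
print(pages.getslice(0, 30))  # only pages 0 and 1 are ever fetched
```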

View File

@@ -4,7 +4,7 @@ from .common import InfoExtractor
 
 
 class AcademicEarthCourseIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?academicearth\.org/playlists/(?P<id>[^?#/]+)'
+    _VALID_URL = r'^https?://(?:www\.)?academicearth\.org/playlists/(?P<id>[^?#/]+)'
     IE_NAME = 'AcademicEarth:Course'
     _TEST = {
         'url': 'http://academicearth.org/playlists/laws-of-nature/',

View File

@@ -43,14 +43,14 @@ class ACastIE(ACastBaseIE):
     _VALID_URL = r'''(?x:
                     https?://
                         (?:
-                            (?:(?:embed|www|shows)\.)?acast\.com/|
+                            (?:(?:embed|www)\.)?acast\.com/|
                             play\.acast\.com/s/
                         )
-                        (?P<channel>[^/?#]+)/(?:episodes/)?(?P<id>[^/#?"]+)
+                        (?P<channel>[^/]+)/(?P<id>[^/#?"]+)
                     )'''
     _EMBED_REGEX = [rf'(?x)<iframe[^>]+\bsrc=[\'"](?P<url>{_VALID_URL})']
     _TESTS = [{
-        'url': 'https://shows.acast.com/sparpodcast/episodes/2.raggarmordet-rosterurdetforflutna',
+        'url': 'https://www.acast.com/sparpodcast/2.raggarmordet-rosterurdetforflutna',
         'info_dict': {
             'id': '2a92b283-1a75-4ad8-8396-499c641de0d9',
             'ext': 'mp3',
@@ -59,7 +59,7 @@ class ACastIE(ACastBaseIE):
             'timestamp': 1477346700,
             'upload_date': '20161024',
             'duration': 2766,
-            'creators': ['Third Ear Studio'],
+            'creator': 'Third Ear Studio',
             'series': 'Spår',
             'episode': '2. Raggarmordet - Röster ur det förflutna',
             'thumbnail': 'https://assets.pippa.io/shows/616ebe1886d7b1398620b943/616ebe33c7e6e70013cae7da.jpg',
@@ -67,16 +67,13 @@ class ACastIE(ACastBaseIE):
             'display_id': '2.raggarmordet-rosterurdetforflutna',
             'season_number': 4,
             'season': 'Season 4',
-        },
+        }
     }, {
         'url': 'http://embed.acast.com/adambuxton/ep.12-adam-joeschristmaspodcast2015',
         'only_matching': True,
     }, {
         'url': 'https://play.acast.com/s/rattegangspodden/s04e09styckmordetihelenelund-del2-2',
         'only_matching': True,
-    }, {
-        'url': 'https://www.acast.com/sparpodcast/2.raggarmordet-rosterurdetforflutna',
-        'only_matching': True,
     }, {
         'url': 'https://play.acast.com/s/sparpodcast/2a92b283-1a75-4ad8-8396-499c641de0d9',
         'only_matching': True,
@@ -96,13 +93,13 @@ class ACastIE(ACastBaseIE):
             'series': 'Democracy Sausage with Mark Kenny',
             'timestamp': 1684826362,
             'description': 'md5:feabe1fc5004c78ee59c84a46bf4ba16',
-        },
+        }
     }]
 
     def _real_extract(self, url):
         channel, display_id = self._match_valid_url(url).groups()
         episode = self._call_api(
-            f'{channel}/episodes/{display_id}',
+            '%s/episodes/%s' % (channel, display_id),
             display_id, {'showInfo': 'true'})
         return self._extract_episode(
             episode, self._extract_show_info(episode.get('show') or {}))
@@ -113,7 +110,7 @@ class ACastChannelIE(ACastBaseIE):
     _VALID_URL = r'''(?x)
                     https?://
                         (?:
-                            (?:(?:www|shows)\.)?acast\.com/|
+                            (?:www\.)?acast\.com/|
                             play\.acast\.com/s/
                         )
                         (?P<id>[^/#?]+)
@@ -123,20 +120,17 @@ class ACastChannelIE(ACastBaseIE):
         'info_dict': {
             'id': '4efc5294-5385-4847-98bd-519799ce5786',
             'title': 'Today in Focus',
-            'description': 'md5:feca253de9947634605080cd9eeea2bf',
+            'description': 'md5:c09ce28c91002ce4ffce71d6504abaae',
         },
         'playlist_mincount': 200,
     }, {
         'url': 'http://play.acast.com/s/ft-banking-weekly',
         'only_matching': True,
-    }, {
-        'url': 'https://shows.acast.com/sparpodcast',
-        'only_matching': True,
     }]
 
     @classmethod
     def suitable(cls, url):
-        return False if ACastIE.suitable(url) else super().suitable(url)
+        return False if ACastIE.suitable(url) else super(ACastChannelIE, cls).suitable(url)
 
     def _real_extract(self, url):
         show_slug = self._match_id(url)
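
One detail worth noting in the last hunk: only the `super()` call style changes; the `suitable()` hand-off itself is identical on both sides. A toy illustration of that pattern (the classes and URLs here are invented for the example):

```
# Toy version of the suitable() hand-off above: the broader channel matcher
# defers to the more specific episode matcher so a URL is never claimed twice.
import re

class EpisodeIE:
    _VALID_URL = r'https?://example\.com/[^/]+/[^/]+'

    @classmethod
    def suitable(cls, url):
        return re.match(cls._VALID_URL, url) is not None

class ChannelIE:
    _VALID_URL = r'https?://example\.com/[^/]+'

    @classmethod
    def suitable(cls, url):
        # False if the more specific extractor already claims the URL
        return False if EpisodeIE.suitable(url) else re.match(cls._VALID_URL, url) is not None

assert EpisodeIE.suitable('https://example.com/show/ep1')
assert not ChannelIE.suitable('https://example.com/show/ep1')
assert ChannelIE.suitable('https://example.com/show')
```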

View File

@@ -3,10 +3,9 @@ from ..utils import (
     float_or_none,
     format_field,
     int_or_none,
+    traverse_obj,
     parse_codecs,
     parse_qs,
-    str_or_none,
-    traverse_obj,
 )
@@ -25,7 +24,7 @@ class AcFunVideoBaseIE(InfoExtractor):
             'width': int_or_none(video.get('width')),
             'height': int_or_none(video.get('height')),
             'tbr': float_or_none(video.get('avgBitrate')),
-            **parse_codecs(video.get('codecs', '')),
+            **parse_codecs(video.get('codecs', ''))
         })
 
         return {
@@ -77,7 +76,7 @@ class AcFunVideoIE(AcFunVideoBaseIE):
             'comment_count': int,
             'thumbnail': r're:^https?://.*\.(jpg|jpeg)',
             'description': 'md5:67583aaf3a0f933bd606bc8a2d3ebb17',
-        },
+        }
     }]
 
     def _real_extract(self, url):
@@ -130,7 +129,7 @@ class AcFunBangumiIE(AcFunVideoBaseIE):
             'title': '红孩儿之趴趴蛙寻石记 第5话 ',
             'duration': 760.0,
             'season': '红孩儿之趴趴蛙寻石记',
-            'season_id': '5023171',
+            'season_id': 5023171,
             'season_number': 1,  # series has only 1 season
             'episode': 'Episode 5',
             'episode_number': 5,
@@ -147,7 +146,7 @@ class AcFunBangumiIE(AcFunVideoBaseIE):
             'title': '叽歪老表(第二季) 第5话 坚不可摧',
             'season': '叽歪老表(第二季)',
             'season_number': 2,
-            'season_id': '6065485',
+            'season_id': 6065485,
             'episode': '坚不可摧',
             'episode_number': 5,
             'upload_date': '20220324',
@@ -192,7 +191,7 @@ class AcFunBangumiIE(AcFunVideoBaseIE):
             'title': json_bangumi_data.get('showTitle'),
             'thumbnail': json_bangumi_data.get('image'),
             'season': json_bangumi_data.get('bangumiTitle'),
-            'season_id': str_or_none(season_id),
+            'season_id': season_id,
             'season_number': season_number,
             'episode': json_bangumi_data.get('title'),
             'episode_number': episode_number,

View File

@@ -3,53 +3,33 @@
 import binascii
 import json
 import os
 import random
-import time
 
 from .common import InfoExtractor
 from ..aes import aes_cbc_decrypt_bytes, unpad_pkcs7
+from ..compat import compat_b64decode
 from ..networking.exceptions import HTTPError
 from ..utils import (
-    ExtractorError,
     ass_subtitles_timecode,
+    bytes_to_intlist,
     bytes_to_long,
+    ExtractorError,
     float_or_none,
     int_or_none,
-    join_nonempty,
+    intlist_to_bytes,
     long_to_bytes,
-    parse_iso8601,
     pkcs1pad,
-    str_or_none,
     strip_or_none,
     try_get,
     unified_strdate,
     urlencode_postdata,
 )
-from ..utils.traversal import traverse_obj
 
 
-class ADNBaseIE(InfoExtractor):
+class ADNIE(InfoExtractor):
     IE_DESC = 'Animation Digital Network'
-    _NETRC_MACHINE = 'animationdigitalnetwork'
-    _BASE = 'animationdigitalnetwork.fr'
-    _API_BASE_URL = f'https://gw.api.{_BASE}/'
-    _PLAYER_BASE_URL = f'{_API_BASE_URL}player/'
-    _HEADERS = {}
-    _LOGIN_ERR_MESSAGE = 'Unable to log in'
-    _RSA_KEY = (0x9B42B08905199A5CCE2026274399CA560ECB209EE9878A708B1C0812E1BB8CB5D1FB7441861147C1A1F2F3A0476DD63A9CAC20D3E983613346850AA6CB38F16DC7D720FD7D86FC6E5B3D5BBC72E14CD0BF9E869F2CEA2CCAD648F1DCE38F1FF916CEFB2D339B64AA0264372344BC775E265E8A852F88144AB0BD9AA06C1A4ABB, 65537)
-    _POS_ALIGN_MAP = {
-        'start': 1,
-        'end': 3,
-    }
-    _LINE_ALIGN_MAP = {
-        'middle': 8,
-        'end': 4,
-    }
-
-
-class ADNIE(ADNBaseIE):
-    _VALID_URL = r'https?://(?:www\.)?animationdigitalnetwork\.com/(?:(?P<lang>de)/)?video/[^/?#]+/(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?(?:animation|anime)digitalnetwork\.fr/video/[^/]+/(?P<id>\d+)'
     _TESTS = [{
-        'url': 'https://animationdigitalnetwork.com/video/558-fruits-basket/9841-episode-1-a-ce-soir',
+        'url': 'https://animationdigitalnetwork.fr/video/fruits-basket/9841-episode-1-a-ce-soir',
         'md5': '1c9ef066ceb302c86f80c2b371615261',
         'info_dict': {
             'id': '9841',
@@ -64,32 +44,29 @@ class ADNIE(ADNBaseIE):
             'season_number': 1,
             'episode': 'À ce soir !',
             'episode_number': 1,
-            'thumbnail': str,
-            'season': 'Season 1',
         },
-        'skip': 'Only available in French and German speaking Europe',
+        'skip': 'Only available in region (FR, ...)',
     }, {
-        'url': 'https://animationdigitalnetwork.com/de/video/973-the-eminence-in-shadow/23550-folge-1',
-        'md5': '5c5651bf5791fa6fcd7906012b9d94e8',
-        'info_dict': {
-            'id': '23550',
-            'ext': 'mp4',
-            'episode_number': 1,
-            'duration': 1417,
-            'release_date': '20231004',
-            'series': 'The Eminence in Shadow',
-            'season_number': 2,
-            'episode': str,
-            'title': str,
-            'thumbnail': str,
-            'season': 'Season 2',
-            'comment_count': int,
-            'average_rating': float,
-            'description': str,
-        },
-        # 'skip': 'Only available in French and German speaking Europe',
+        'url': 'http://animedigitalnetwork.fr/video/blue-exorcist-kyoto-saga/7778-episode-1-debut-des-hostilites',
+        'only_matching': True,
     }]
+    _NETRC_MACHINE = 'animationdigitalnetwork'
+    _BASE = 'animationdigitalnetwork.fr'
+    _API_BASE_URL = 'https://gw.api.' + _BASE + '/'
+    _PLAYER_BASE_URL = _API_BASE_URL + 'player/'
+    _HEADERS = {}
+    _LOGIN_ERR_MESSAGE = 'Unable to log in'
+    _RSA_KEY = (0x9B42B08905199A5CCE2026274399CA560ECB209EE9878A708B1C0812E1BB8CB5D1FB7441861147C1A1F2F3A0476DD63A9CAC20D3E983613346850AA6CB38F16DC7D720FD7D86FC6E5B3D5BBC72E14CD0BF9E869F2CEA2CCAD648F1DCE38F1FF916CEFB2D339B64AA0264372344BC775E265E8A852F88144AB0BD9AA06C1A4ABB, 65537)
+    _POS_ALIGN_MAP = {
+        'start': 1,
+        'end': 3,
+    }
+    _LINE_ALIGN_MAP = {
+        'middle': 8,
+        'end': 4,
+    }
 
     def _get_subtitles(self, sub_url, video_id):
         if not sub_url:
             return None
@@ -106,9 +83,9 @@ class ADNIE(ADNBaseIE):
         # http://animationdigitalnetwork.fr/components/com_vodvideo/videojs/adn-vjs.min.js
         dec_subtitles = unpad_pkcs7(aes_cbc_decrypt_bytes(
-            base64.b64decode(enc_subtitles[24:]),
+            compat_b64decode(enc_subtitles[24:]),
             binascii.unhexlify(self._K + '7fac1178830cfe0c'),
-            base64.b64decode(enc_subtitles[:24])))
+            compat_b64decode(enc_subtitles[:24])))
         subtitles_json = self._parse_json(dec_subtitles.decode(), None, fatal=False)
         if not subtitles_json:
             return None
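
The decryption scheme itself is untouched by this hunk; only the base64 helper changes. For reference, a hedged sketch of the same scheme using pycryptodome rather than yt-dlp's internal `aes_cbc_decrypt_bytes`/`unpad_pkcs7`; the payload layout (24 base64 characters of IV, then ciphertext) and the fixed key suffix are taken from the code above:

```
# Sketch only, assuming the payload layout shown in the diff: the first
# 24 base64 chars (including their '==' padding) decode to the 16-byte IV.
import base64
import binascii

from Crypto.Cipher import AES  # pip install pycryptodome

def decrypt_subtitles(enc_subtitles: str, random_hex16: str) -> bytes:
    iv = base64.b64decode(enc_subtitles[:24])           # 16-byte IV
    ciphertext = base64.b64decode(enc_subtitles[24:])
    key = binascii.unhexlify(random_hex16 + '7fac1178830cfe0c')  # 16-byte key
    plaintext = AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext)
    return plaintext[:-plaintext[-1]]                   # strip PKCS#7 padding
```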
@@ -131,7 +108,7 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
             if start is None or end is None or text is None:
                 continue
 
             alignment = self._POS_ALIGN_MAP.get(position_align, 2) + self._LINE_ALIGN_MAP.get(line_align, 0)
-            ssa += os.linesep + 'Dialogue: Marked=0,{},{},Default,,0,0,0,,{}{}'.format(
+            ssa += os.linesep + 'Dialogue: Marked=0,%s,%s,Default,,0,0,0,,%s%s' % (
                 ass_subtitles_timecode(start),
                 ass_subtitles_timecode(end),
                 '{\\a%d}' % alignment if alignment != 2 else '',
@@ -139,8 +116,6 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
         if sub_lang == 'vostf':
             sub_lang = 'fr'
-        elif sub_lang == 'vostde':
-            sub_lang = 'de'
 
         subtitles.setdefault(sub_lang, []).extend([{
             'ext': 'json',
             'data': json.dumps(sub),
@@ -162,7 +137,7 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
                 'username': username,
             })) or {}).get('accessToken')
             if access_token:
-                self._HEADERS['Authorization'] = f'Bearer {access_token}'
+                self._HEADERS = {'authorization': 'Bearer ' + access_token}
         except ExtractorError as e:
             message = None
             if isinstance(e.cause, HTTPError) and e.cause.status == 401:
@@ -172,9 +147,8 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
             self.report_warning(message or self._LOGIN_ERR_MESSAGE)
 
     def _real_extract(self, url):
-        lang, video_id = self._match_valid_url(url).group('lang', 'id')
-        self._HEADERS['X-Target-Distribution'] = lang or 'fr'
-        video_base_url = self._PLAYER_BASE_URL + f'video/{video_id}/'
+        video_id = self._match_id(url)
+        video_base_url = self._PLAYER_BASE_URL + 'video/%s/' % video_id
         player = self._download_json(
             video_base_url + 'configuration', video_id,
             'Downloading player config JSON metadata',
@@ -183,29 +157,26 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
         user = options['user']
         if not user.get('hasAccess'):
-            start_date = traverse_obj(options, ('video', 'startDate', {str}))
-            if (parse_iso8601(start_date) or 0) > time.time():
-                raise ExtractorError(f'This video is not available yet. Release date: {start_date}', expected=True)
-            self.raise_login_required('This video requires a subscription', method='password')
+            self.raise_login_required()
 
         token = self._download_json(
             user.get('refreshTokenUrl') or (self._PLAYER_BASE_URL + 'refresh/token'),
             video_id, 'Downloading access token', headers={
-                'X-Player-Refresh-Token': user['refreshToken'],
+                'x-player-refresh-token': user['refreshToken']
             }, data=b'')['token']
 
         links_url = try_get(options, lambda x: x['video']['url']) or (video_base_url + 'link')
         self._K = ''.join(random.choices('0123456789abcdef', k=16))
-        message = list(json.dumps({
+        message = bytes_to_intlist(json.dumps({
             'k': self._K,
             't': token,
-        }).encode())
+        }))
 
         # Sometimes authentication fails for no good reason, retry with
         # a different random padding
         links_data = None
         for _ in range(3):
-            padded_message = bytes(pkcs1pad(message, 128))
+            padded_message = intlist_to_bytes(pkcs1pad(message, 128))
             n, e = self._RSA_KEY
             encrypted_message = long_to_bytes(pow(bytes_to_long(padded_message), e, n))
             authorization = base64.b64encode(encrypted_message).decode()
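
The retry comment above works because PKCS#1 v1.5 padding is randomized: each attempt pads the same JSON message with fresh random non-zero bytes, so the RSA ciphertext differs every time. A self-contained sketch of that step; `n` and `e` stand in for the extractor's `_RSA_KEY` and the helper names are this sketch's own:

```
# Hedged sketch of the pkcs1pad + raw-RSA step above (textbook PKCS#1 v1.5
# type-2 padding); not a drop-in replacement for yt-dlp's helpers.
import base64
import os

def pkcs1pad_v15(message: bytes, block_size: int = 128) -> bytes:
    # Layout: 0x00 0x02 <random non-zero bytes> 0x00 <message>
    pad_len = block_size - len(message) - 3
    padding = bytes(b % 255 + 1 for b in os.urandom(pad_len))  # never zero
    return b'\x00\x02' + padding + b'\x00' + message

def rsa_encrypt(message: bytes, n: int, e: int) -> str:
    padded = int.from_bytes(pkcs1pad_v15(message), 'big')
    cipher = pow(padded, e, n)                 # raw RSA on the padded block
    return base64.b64encode(cipher.to_bytes(128, 'big')).decode()
```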
@@ -213,13 +184,12 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
             try:
                 links_data = self._download_json(
                     links_url, video_id, 'Downloading links JSON metadata', headers={
-                        'X-Player-Token': authorization,
-                        **self._HEADERS,
+                        'X-Player-Token': authorization
                     }, query={
                         'freeWithAds': 'true',
                         'adaptive': 'false',
                         'withMetadata': 'true',
-                        'source': 'Web',
+                        'source': 'Web'
                     })
                 break
             except ExtractorError as e:
@@ -232,7 +202,7 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
                 error = self._parse_json(e.cause.response.read(), video_id)
                 message = error.get('message')
-                if e.cause.status == 403 and error.get('code') == 'player-bad-geolocation-country':
+                if e.cause.code == 403 and error.get('code') == 'player-bad-geolocation-country':
                     self.raise_geo_restricted(msg=message)
                 raise ExtractorError(message)
         else:
@@ -251,8 +221,7 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
             for quality, load_balancer_url in qualities.items():
                 load_balancer_data = self._download_json(
                     load_balancer_url, video_id,
-                    f'Downloading {format_id} {quality} JSON metadata',
-                    headers=self._HEADERS,
+                    'Downloading %s %s JSON metadata' % (format_id, quality),
                     fatal=False) or {}
                 m3u8_url = load_balancer_data.get('location')
                 if not m3u8_url:
@@ -263,17 +232,11 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
                 if format_id == 'vf':
                     for f in m3u8_formats:
                         f['language'] = 'fr'
-                elif format_id == 'vde':
-                    for f in m3u8_formats:
-                        f['language'] = 'de'
                 formats.extend(m3u8_formats)
 
-        if not formats:
-            self.raise_login_required('This video requires a subscription', method='password')
-
         video = (self._download_json(
-            self._API_BASE_URL + f'video/{video_id}', video_id,
-            'Downloading additional video metadata', fatal=False, headers=self._HEADERS) or {}).get('video') or {}
+            self._API_BASE_URL + 'video/%s' % video_id, video_id,
+            'Downloading additional video metadata', fatal=False) or {}).get('video') or {}
         show = video.get('show') or {}
 
         return {
@@ -292,38 +255,3 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
             'average_rating': float_or_none(video.get('rating') or metas.get('rating')),
             'comment_count': int_or_none(video.get('commentsCount')),
         }
-
-
-class ADNSeasonIE(ADNBaseIE):
-    _VALID_URL = r'https?://(?:www\.)?animationdigitalnetwork\.com/(?:(?P<lang>de)/)?video/(?P<id>\d+)[^/?#]*/?(?:$|[#?])'
-    _TESTS = [{
-        'url': 'https://animationdigitalnetwork.com/video/911-tokyo-mew-mew-new',
-        'playlist_count': 12,
-        'info_dict': {
-            'id': '911',
-            'title': 'Tokyo Mew Mew New',
-        },
-        # 'skip': 'Only available in French end German speaking Europe',
-    }]
-
-    def _real_extract(self, url):
-        lang, video_show_slug = self._match_valid_url(url).group('lang', 'id')
-        self._HEADERS['X-Target-Distribution'] = lang or 'fr'
-        show = self._download_json(
-            f'{self._API_BASE_URL}show/{video_show_slug}/', video_show_slug,
-            'Downloading show JSON metadata', headers=self._HEADERS)['show']
-        show_id = str(show['id'])
-        episodes = self._download_json(
-            f'{self._API_BASE_URL}video/show/{show_id}', video_show_slug,
-            'Downloading episode list', headers=self._HEADERS, query={
-                'order': 'asc',
-                'limit': '-1',
-            })
-
-        def entries():
-            for episode_id in traverse_obj(episodes, ('videos', ..., 'id', {str_or_none})):
-                yield self.url_result(join_nonempty(
-                    'https://animationdigitalnetwork.com', lang, 'video',
-                    video_show_slug, episode_id, delim='/'), ADNIE, episode_id)
-
-        return self.playlist_result(entries(), show_id, show.get('title'))

View File

@@ -1,6 +1,8 @@
-import urllib.parse
-
 from .common import InfoExtractor
+from ..compat import (
+    compat_parse_qs,
+    compat_urlparse,
+)
 
 
 class AdobeConnectIE(InfoExtractor):
@@ -10,13 +12,13 @@ class AdobeConnectIE(InfoExtractor):
         video_id = self._match_id(url)
         webpage = self._download_webpage(url, video_id)
         title = self._html_extract_title(webpage)
-        qs = urllib.parse.parse_qs(self._search_regex(r"swfUrl\s*=\s*'([^']+)'", webpage, 'swf url').split('?')[1])
+        qs = compat_parse_qs(self._search_regex(r"swfUrl\s*=\s*'([^']+)'", webpage, 'swf url').split('?')[1])
         is_live = qs.get('isLive', ['false'])[0] == 'true'
         formats = []
         for con_string in qs['conStrings'][0].split(','):
             formats.append({
                 'format_id': con_string.split('://')[0],
-                'app': urllib.parse.quote('?' + con_string.split('?')[1] + 'flvplayerapp/' + qs['appInstance'][0]),
+                'app': compat_urlparse.quote('?' + con_string.split('?')[1] + 'flvplayerapp/' + qs['appInstance'][0]),
                 'ext': 'flv',
                 'play_path': 'mp4:' + qs['streamName'][0],
                 'rtmp_conn': 'S:' + qs['ticket'][0],

File diff suppressed because it is too large

View File

@@ -2,12 +2,13 @@
 import functools
 import re
 
 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
-    ISO639Utils,
-    OnDemandPagedList,
     float_or_none,
     int_or_none,
+    ISO639Utils,
     join_nonempty,
+    OnDemandPagedList,
     parse_duration,
     str_or_none,
     str_to_int,
@@ -35,7 +36,7 @@ class AdobeTVBaseIE(InfoExtractor):
         return subtitles
 
     def _parse_video_data(self, video_data):
-        video_id = str(video_data['id'])
+        video_id = compat_str(video_data['id'])
         title = video_data['title']
 
         s3_extracted = False
@@ -150,7 +151,7 @@ class AdobeTVPlaylistBaseIE(AdobeTVBaseIE):
             page += 1
             query['page'] = page
             for element_data in self._call_api(
-                    self._RESOURCE, display_id, query, f'Download Page {page}'):
+                    self._RESOURCE, display_id, query, 'Download Page %d' % page):
                 yield self._process_data(element_data)
 
     def _extract_playlist_entries(self, display_id, query):

View File

@@ -91,7 +91,7 @@ class AdultSwimIE(TurnerBaseIE):
   getShowBySlug(slug:"%s") {
     %%s
   }
-}''' % show_path  # noqa: UP031
+}''' % show_path
         if episode_path:
             query = query % '''title
               getVideoBySlug(slug:"%s") {
@@ -107,6 +107,7 @@ class AdultSwimIE(TurnerBaseIE):
                 title
                 tvRating
               }''' % episode_path
+            ['getVideoBySlug']
         else:
             query = query % '''metaDescription
               title
@@ -128,7 +129,7 @@ class AdultSwimIE(TurnerBaseIE):
         episode_title = title = video_data['title']
         series = show_data.get('title')
         if series:
-            title = f'{series} - {title}'
+            title = '%s - %s' % (series, title)
         info = {
             'id': video_id,
             'title': title,
@@ -191,7 +192,7 @@ class AdultSwimIE(TurnerBaseIE):
             if not slug:
                 continue
             entries.append(self.url_result(
-                f'http://adultswim.com/videos/{show_path}/{slug}',
+                'http://adultswim.com/videos/%s/%s' % (show_path, slug),
                 'AdultSwim', video.get('_id')))
         return self.playlist_result(
             entries, show_path, show_data.get('title'),
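
Context for the `# noqa: UP031` dropped in the first hunk: printf-style `%` formatting is kept deliberately here because GraphQL documents are brace-heavy, so `str.format` or f-strings would force every literal `{`/`}` to be doubled. A small self-contained comparison (the query shape mirrors the one above; the slug is invented):

```
# Why printf-style survives in GraphQL query builders: '%' leaves braces alone.
slug = 'attack-on-titan'

query = '''query {
  getShowBySlug(slug:"%s") {
    title
  }
}''' % slug

# The f-string equivalent needs every brace escaped by doubling:
query_f = f'''query {{
  getShowBySlug(slug:"{slug}") {{
    title
  }}
}}'''
assert query == query_f
```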

View File

@@ -73,8 +73,8 @@ class AENetworksBaseIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
     def _extract_aetn_info(self, domain, filter_key, filter_value, url):
         requestor_id, brand = self._DOMAIN_MAP[domain]
         result = self._download_json(
-            f'https://feeds.video.aetnd.com/api/v2/{brand}/videos',
-            filter_value, query={f'filter[{filter_key}]': filter_value})
+            'https://feeds.video.aetnd.com/api/v2/%s/videos' % brand,
+            filter_value, query={'filter[%s]' % filter_key: filter_value})
         result = traverse_obj(
             result, ('results',
                      lambda k, v: k == 0 and v[filter_key] == filter_value),
@@ -93,7 +93,7 @@ class AENetworksBaseIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
             resource = self._get_mvpd_resource(
                 requestor_id, theplatform_metadata['title'],
                 theplatform_metadata.get('AETN$PPL_pplProgramId') or theplatform_metadata.get('AETN$PPL_pplProgramId_OLD'),
-                traverse_obj(theplatform_metadata, ('ratings', 0, 'rating')))
+                theplatform_metadata['ratings'][0]['rating'])
             auth = self._extract_mvpd_auth(
                 url, video_id, requestor_id, resource)
         info.update(self._extract_aen_smil(media_url, video_id, auth))
@@ -121,28 +121,18 @@ class AENetworksIE(AENetworksBaseIE):
         'info_dict': {
             'id': '22253814',
             'ext': 'mp4',
-            'title': 'Winter Is Coming',
-            'description': 'md5:a40e370925074260b1c8a633c632c63a',
+            'title': 'Winter is Coming',
+            'description': 'md5:641f424b7a19d8e24f26dea22cf59d74',
             'timestamp': 1338306241,
             'upload_date': '20120529',
             'uploader': 'AENE-NEW',
-            'duration': 2592.0,
-            'thumbnail': r're:^https?://.*\.jpe?g$',
-            'chapters': 'count:5',
-            'tags': 'count:14',
-            'categories': ['Mountain Men'],
-            'episode_number': 1,
-            'episode': 'Episode 1',
-            'season': 'Season 1',
-            'season_number': 1,
-            'series': 'Mountain Men',
         },
         'params': {
             # m3u8 download
             'skip_download': True,
         },
         'add_ie': ['ThePlatform'],
-        'skip': 'Geo-restricted - This content is not available in your location.',
+        'skip': 'Geo-restricted - This content is not available in your location.'
     }, {
         'url': 'http://www.aetv.com/shows/duck-dynasty/season-9/episode-1',
         'info_dict': {
@@ -153,15 +143,6 @@ class AENetworksIE(AENetworksBaseIE):
             'timestamp': 1452634428,
             'upload_date': '20160112',
             'uploader': 'AENE-NEW',
-            'duration': 1277.695,
-            'thumbnail': r're:^https?://.*\.jpe?g$',
-            'chapters': 'count:4',
-            'tags': 'count:23',
-            'episode': 'Episode 1',
-            'episode_number': 1,
-            'season': 'Season 9',
-            'season_number': 9,
-            'series': 'Duck Dynasty',
         },
         'params': {
             # m3u8 download
@@ -171,28 +152,28 @@ class AENetworksIE(AENetworksBaseIE):
         'skip': 'This video is only available for users of participating TV providers.',
     }, {
         'url': 'http://www.fyi.tv/shows/tiny-house-nation/season-1/episode-8',
-        'only_matching': True,
+        'only_matching': True
     }, {
         'url': 'http://www.mylifetime.com/shows/project-runway-junior/season-1/episode-6',
-        'only_matching': True,
+        'only_matching': True
     }, {
         'url': 'http://www.mylifetime.com/movies/center-stage-on-pointe/full-movie',
-        'only_matching': True,
+        'only_matching': True
    }, {
         'url': 'https://watch.lifetimemovieclub.com/movies/10-year-reunion/full-movie',
-        'only_matching': True,
+        'only_matching': True
    }, {
         'url': 'http://www.history.com/specials/sniper-into-the-kill-zone/full-special',
-        'only_matching': True,
+        'only_matching': True
    }, {
         'url': 'https://www.aetv.com/specials/hunting-jonbenets-killer-the-untold-story/preview-hunting-jonbenets-killer-the-untold-story',
-        'only_matching': True,
+        'only_matching': True
    }, {
         'url': 'http://www.history.com/videos/history-of-valentines-day',
-        'only_matching': True,
+        'only_matching': True
    }, {
         'url': 'https://play.aetv.com/shows/duck-dynasty/videos/best-of-duck-dynasty-getting-quack-in-shape',
-        'only_matching': True,
+        'only_matching': True
     }]
 
     def _real_extract(self, url):
@@ -209,14 +190,14 @@ class AENetworksListBaseIE(AENetworksBaseIE):
                 %s(slug: "%s") {
                   %s
                 }
-              }''' % (resource, slug, fields),  # noqa: UP031
+              }''' % (resource, slug, fields),
             }))['data'][resource]
 
     def _real_extract(self, url):
         domain, slug = self._match_valid_url(url).groups()
         _, brand = self._DOMAIN_MAP[domain]
         playlist = self._call_api(self._RESOURCE, slug, brand, self._FIELDS)
-        base_url = f'http://watch.{domain}'
+        base_url = 'http://watch.%s' % domain
 
         entries = []
         for item in (playlist.get(self._ITEMS_KEY) or []):
@@ -248,10 +229,10 @@ class AENetworksCollectionIE(AENetworksListBaseIE):
         'playlist_mincount': 12,
     }, {
         'url': 'https://watch.historyvault.com/shows/america-the-story-of-us-2/season-1/list/america-the-story-of-us',
-        'only_matching': True,
+        'only_matching': True
     }, {
         'url': 'https://www.historyvault.com/collections/mysteryquest',
-        'only_matching': True,
+        'only_matching': True
     }]
     _RESOURCE = 'list'
     _ITEMS_KEY = 'items'
@@ -309,7 +290,7 @@ class HistoryTopicIE(AENetworksBaseIE):
         'info_dict': {
             'id': '40700995724',
             'ext': 'mp4',
-            'title': 'History of Valentine’s Day',
+            'title': "History of Valentine’s Day",
             'description': 'md5:7b57ea4829b391995b405fa60bd7b5f7',
             'timestamp': 1375819729,
             'upload_date': '20130806',
@@ -357,13 +338,12 @@ class BiographyIE(AENetworksBaseIE):
             'skip_download': True,
         },
         'add_ie': ['ThePlatform'],
-        'skip': '404 Not Found',
     }]
 
     def _real_extract(self, url):
         display_id = self._match_id(url)
         webpage = self._download_webpage(url, display_id)
         player_url = self._search_regex(
-            rf'<phoenix-iframe[^>]+src="({HistoryPlayerIE._VALID_URL})',
+            r'<phoenix-iframe[^>]+src="(%s)' % HistoryPlayerIE._VALID_URL,
             webpage, 'player URL')
         return self.url_result(player_url, HistoryPlayerIE.ie_key())
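
The `_get_mvpd_resource` hunk above is the behavioral change to watch: `traverse_obj(..., ('ratings', 0, 'rating'))` degrades to None when the path is missing, while the plain subscript on the other side raises and aborts extraction. Sketched with a stand-in metadata dict:

```
# The difference in failure mode, using an invented metadata dict;
# traverse_obj is yt-dlp's safe-navigation helper.
from yt_dlp.utils import traverse_obj

metadata = {'title': 'Some Episode'}  # no 'ratings' key

rating = traverse_obj(metadata, ('ratings', 0, 'rating'))
assert rating is None  # missing path -> None, extraction continues

try:
    metadata['ratings'][0]['rating']  # the direct lookup
except KeyError:
    pass  # would crash the extractor instead
```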

View File

@@ -16,8 +16,8 @@ class AeonCoIE(InfoExtractor):
             'uploader': 'Semiconductor',
             'uploader_id': 'semiconductor',
             'uploader_url': 'https://vimeo.com/semiconductor',
-            'duration': 348,
-        },
+            'duration': 348
+        }
     }, {
         'url': 'https://aeon.co/videos/dazzling-timelapse-shows-how-microbes-spoil-our-food-and-sometimes-enrich-it',
         'md5': '03582d795382e49f2fd0b427b55de409',
@@ -29,8 +29,8 @@ class AeonCoIE(InfoExtractor):
             'uploader': 'Aeon Video',
             'uploader_id': 'aeonvideo',
             'uploader_url': 'https://vimeo.com/aeonvideo',
-            'duration': 1344,
-        },
+            'duration': 1344
+        }
     }, {
         'url': 'https://aeon.co/videos/chew-over-the-prisoners-dilemma-and-see-if-you-can-find-the-rational-path-out',
         'md5': '1cfda0bf3ae24df17d00f2c0cb6cc21b',

View File

@@ -1,26 +1,142 @@
-import datetime as dt
 import functools
+import re
 
 from .common import InfoExtractor
-from ..networking import Request
 from ..utils import (
     ExtractorError,
     OnDemandPagedList,
-    UserNotLive,
+    date_from_str,
     determine_ext,
-    filter_dict,
     int_or_none,
-    orderedSet,
-    parse_iso8601,
+    qualities,
+    traverse_obj,
+    unified_strdate,
+    unified_timestamp,
+    update_url_query,
     url_or_none,
     urlencode_postdata,
-    urljoin,
+    xpath_text,
 )
-from ..utils.traversal import traverse_obj
 
 
-class AfreecaTVBaseIE(InfoExtractor):
+class AfreecaTVIE(InfoExtractor):
+    IE_NAME = 'afreecatv'
+    IE_DESC = 'afreecatv.com'
+    _VALID_URL = r'''(?x)
+                    https?://
+                        (?:
+                            (?:(?:live|afbbs|www)\.)?afreeca(?:tv)?\.com(?::\d+)?
+                            (?:
+                                /app/(?:index|read_ucc_bbs)\.cgi|
+                                /player/[Pp]layer\.(?:swf|html)
+                            )\?.*?\bnTitleNo=|
+                            vod\.afreecatv\.com/(PLAYER/STATION|player)/
+                        )
+                        (?P<id>\d+)
+                    '''
     _NETRC_MACHINE = 'afreecatv'
+    _TESTS = [{
+        'url': 'http://live.afreecatv.com:8079/app/index.cgi?szType=read_ucc_bbs&szBjId=dailyapril&nStationNo=16711924&nBbsNo=18605867&nTitleNo=36164052&szSkin=',
+        'md5': 'f72c89fe7ecc14c1b5ce506c4996046e',
+        'info_dict': {
+            'id': '36164052',
+            'ext': 'mp4',
+            'title': '데일리 에이프릴 요정들의 시상식!',
+            'thumbnail': 're:^https?://(?:video|st)img.afreecatv.com/.*$',
+            'uploader': 'dailyapril',
+            'uploader_id': 'dailyapril',
+            'upload_date': '20160503',
+        },
+        'skip': 'Video is gone',
+    }, {
+        'url': 'http://afbbs.afreecatv.com:8080/app/read_ucc_bbs.cgi?nStationNo=16711924&nTitleNo=36153164&szBjId=dailyapril&nBbsNo=18605867',
+        'info_dict': {
+            'id': '36153164',
+            'title': "BJ유트루와 함께하는 '팅커벨 메이크업!'",
+            'thumbnail': 're:^https?://(?:video|st)img.afreecatv.com/.*$',
+            'uploader': 'dailyapril',
+            'uploader_id': 'dailyapril',
+        },
+        'playlist_count': 2,
+        'playlist': [{
+            'md5': 'd8b7c174568da61d774ef0203159bf97',
+            'info_dict': {
+                'id': '36153164_1',
+                'ext': 'mp4',
+                'title': "BJ유트루와 함께하는 '팅커벨 메이크업!'",
+                'upload_date': '20160502',
+            },
+        }, {
+            'md5': '58f2ce7f6044e34439ab2d50612ab02b',
+            'info_dict': {
+                'id': '36153164_2',
+                'ext': 'mp4',
+                'title': "BJ유트루와 함께하는 '팅커벨 메이크업!'",
+                'upload_date': '20160502',
+            },
+        }],
+        'skip': 'Video is gone',
+    }, {
+        # non standard key
+        'url': 'http://vod.afreecatv.com/PLAYER/STATION/20515605',
+        'info_dict': {
+            'id': '20170411_BE689A0E_190960999_1_2_h',
+            'ext': 'mp4',
+            'title': '혼자사는여자집',
+            'thumbnail': 're:^https?://(?:video|st)img.afreecatv.com/.*$',
+            'uploader': '♥이슬이',
+            'uploader_id': 'dasl8121',
+            'upload_date': '20170411',
+            'duration': 213,
+        },
+        'params': {
+            'skip_download': True,
+        },
+    }, {
+        # adult content
+        'url': 'https://vod.afreecatv.com/player/97267690',
+        'info_dict': {
+            'id': '20180327_27901457_202289533_1',
+            'ext': 'mp4',
+            'title': '[생]빨개요♥ (part 1)',
+            'thumbnail': 're:^https?://(?:video|st)img.afreecatv.com/.*$',
+            'uploader': '[SA]서아',
+            'uploader_id': 'bjdyrksu',
+            'upload_date': '20180327',
+            'duration': 3601,
+        },
+        'params': {
+            'skip_download': True,
+        },
+        'skip': 'The VOD does not exist',
+    }, {
+        'url': 'http://www.afreecatv.com/player/Player.swf?szType=szBjId=djleegoon&nStationNo=11273158&nBbsNo=13161095&nTitleNo=36327652',
+        'only_matching': True,
+    }, {
+        'url': 'https://vod.afreecatv.com/player/96753363',
+        'info_dict': {
+            'id': '20230108_9FF5BEE1_244432674_1',
+            'ext': 'mp4',
+            'uploader_id': 'rlantnghks',
+            'uploader': '페이즈으',
+            'duration': 10840,
+            'thumbnail': 'http://videoimg.afreecatv.com/php/SnapshotLoad.php?rowKey=20230108_9FF5BEE1_244432674_1_r',
+            'upload_date': '20230108',
+            'title': '젠지 페이즈',
+        },
+        'params': {
+            'skip_download': True,
+        },
+    }]
+
+    @staticmethod
+    def parse_video_key(key):
+        video_key = {}
+        m = re.match(r'^(?P<upload_date>\d{8})_\w+_(?P<part>\d+)$', key)
+        if m:
+            video_key['upload_date'] = m.group('upload_date')
+            video_key['part'] = int(m.group('part'))
+        return video_key
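
For reference, what the re-added `parse_video_key()` yields for the two key shapes that appear in the tests above; the function body is copied from the diff and the sample keys are illustrative:

```
# Illustrative only: sample keys mirror the id shapes in the tests above.
import re

def parse_video_key(key):
    video_key = {}
    m = re.match(r'^(?P<upload_date>\d{8})_\w+_(?P<part>\d+)$', key)
    if m:
        video_key['upload_date'] = m.group('upload_date')
        video_key['part'] = int(m.group('part'))
    return video_key

assert parse_video_key('20160502_C59F0B9F_36153164_1') == {
    'upload_date': '20160502', 'part': 1}
assert parse_video_key('20170411_BE689A0E_190960999_1_2_h') == {}  # non standard key
```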
 
     def _perform_login(self, username, password):
         login_form = {
@@ -34,21 +150,21 @@ class AfreecaTVBaseIE(InfoExtractor):
         }
 
         response = self._download_json(
-            'https://login.sooplive.co.kr/app/LoginAction.php', None,
+            'https://login.afreecatv.com/app/LoginAction.php', None,
             'Logging in', data=urlencode_postdata(login_form))
 
         _ERRORS = {
             -4: 'Your account has been suspended due to a violation of our terms and policies.',
-            -5: 'https://member.sooplive.co.kr/app/user_delete_progress.php',
-            -6: 'https://login.sooplive.co.kr/membership/changeMember.php',
-            -8: "Hello! Soop here.\nThe username you have entered belongs to \n an account that requires a legal guardian's consent. \nIf you wish to use our services without restriction, \nplease make sure to go through the necessary verification process.",
-            -9: 'https://member.sooplive.co.kr/app/pop_login_block.php',
-            -11: 'https://login.sooplive.co.kr/afreeca/second_login.php',
-            -12: 'https://member.sooplive.co.kr/app/user_security.php',
+            -5: 'https://member.afreecatv.com/app/user_delete_progress.php',
+            -6: 'https://login.afreecatv.com/membership/changeMember.php',
+            -8: "Hello! AfreecaTV here.\nThe username you have entered belongs to \n an account that requires a legal guardian's consent. \nIf you wish to use our services without restriction, \nplease make sure to go through the necessary verification process.",
+            -9: 'https://member.afreecatv.com/app/pop_login_block.php',
+            -11: 'https://login.afreecatv.com/afreeca/second_login.php',
+            -12: 'https://member.afreecatv.com/app/user_security.php',
             0: 'The username does not exist or you have entered the wrong password.',
             -1: 'The username does not exist or you have entered the wrong password.',
             -3: 'You have entered your username/password incorrectly.',
-            -7: 'You cannot use your Global Soop account to access Korean Soop.',
+            -7: 'You cannot use your Global AfreecaTV account to access Korean AfreecaTV.',
             -10: 'Sorry for the inconvenience. \nYour account has been blocked due to an unauthorized access. \nPlease contact our Help Center for assistance.',
             -32008: 'You have failed to log in. Please contact our Help Center.',
         }
@@ -57,206 +173,169 @@ class AfreecaTVBaseIE(InfoExtractor):
         if result != 1:
             error = _ERRORS.get(result, 'You have failed to log in.')
             raise ExtractorError(
-                f'Unable to login: {self.IE_NAME} said: {error}',
+                'Unable to login: %s said: %s' % (self.IE_NAME, error),
                 expected=True)
 
-    def _call_api(self, endpoint, display_id, data=None, headers=None, query=None):
-        return self._download_json(Request(
-            f'https://api.m.sooplive.co.kr/{endpoint}',
-            data=data, headers=headers, query=query,
-            extensions={'legacy_ssl': True}), display_id,
-            'Downloading API JSON', 'Unable to download API JSON')
-
-    @staticmethod
-    def _fixup_thumb(thumb_url):
-        if not url_or_none(thumb_url):
-            return None
-        # Core would determine_ext as 'php' from the url, so we need to provide the real ext
-        # See: https://github.com/yt-dlp/yt-dlp/issues/11537
-        return [{'url': thumb_url, 'ext': 'jpg'}]
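
The removed `_fixup_thumb` helper exists because of the thumbnail URL shape visible in the tests: `determine_ext()` guesses from the URL path, so a `.php` snapshot endpoint gets tagged `php` even though it serves JPEG data. A quick demonstration of the failure mode and the fix:

```
# Why ext='jpg' is pinned: determine_ext() goes by the URL path.
from yt_dlp.utils import determine_ext

thumb = 'http://videoimg.afreecatv.com/php/SnapshotLoad.php?rowKey=20230108_9FF5BEE1_244432674_1_r'
assert determine_ext(thumb) == 'php'

# Supplying the real extension alongside the URL sidesteps the guess:
thumbnails = [{'url': thumb, 'ext': 'jpg'}]
```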
-
-class AfreecaTVIE(AfreecaTVBaseIE):
-    IE_NAME = 'soop'
-    IE_DESC = 'sooplive.co.kr'
-    _VALID_URL = r'https?://vod\.(?:sooplive\.co\.kr|afreecatv\.com)/(?:PLAYER/STATION|player)/(?P<id>\d+)/?(?:$|[?#&])'
-    _TESTS = [{
-        'url': 'https://vod.sooplive.co.kr/player/96753363',
-        'info_dict': {
-            'id': '20230108_9FF5BEE1_244432674_1',
-            'ext': 'mp4',
-            'uploader_id': 'rlantnghks',
-            'uploader': '페이즈으',
-            'duration': 10840,
-            'thumbnail': r're:https?://videoimg\.(?:sooplive\.co\.kr|afreecatv\.com)/.+',
-            'upload_date': '20230108',
-            'timestamp': 1673186405,
-            'title': '젠지 페이즈',
-        },
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        # non standard key
-        'url': 'http://vod.sooplive.co.kr/PLAYER/STATION/20515605',
-        'info_dict': {
-            'id': '20170411_BE689A0E_190960999_1_2_h',
-            'ext': 'mp4',
-            'title': '혼자사는여자집',
-            'thumbnail': r're:https?://(?:video|st)img\.(?:sooplive\.co\.kr|afreecatv\.com)/.+',
-            'uploader': '♥이슬이',
-            'uploader_id': 'dasl8121',
-            'upload_date': '20170411',
-            'timestamp': 1491929865,
-            'duration': 213,
-        },
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        # adult content
-        'url': 'https://vod.sooplive.co.kr/player/97267690',
-        'info_dict': {
-            'id': '20180327_27901457_202289533_1',
-            'ext': 'mp4',
-            'title': '[생]빨개요♥ (part 1)',
-            'thumbnail': r're:https?://(?:video|st)img\.(?:sooplive\.co\.kr|afreecatv\.com)/.+',
-            'uploader': '[SA]서아',
-            'uploader_id': 'bjdyrksu',
-            'upload_date': '20180327',
-            'duration': 3601,
-        },
-        'params': {
-            'skip_download': True,
-        },
-        'skip': 'The VOD does not exist',
-    }, {
-        # adult content
-        'url': 'https://vod.sooplive.co.kr/player/70395877',
-        'only_matching': True,
-    }, {
-        # subscribers only
-        'url': 'https://vod.sooplive.co.kr/player/104647403',
-        'only_matching': True,
-    }, {
-        # private
-        'url': 'https://vod.sooplive.co.kr/player/81669846',
-        'only_matching': True,
-    }]
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        data = self._call_api(
-            'station/video/a/view', video_id, headers={'Referer': url},
-            data=urlencode_postdata({
-                'nTitleNo': video_id,
-                'nApiLevel': 10,
-            }))['data']
-
-        error_code = traverse_obj(data, ('code', {int}))
-        if error_code == -6221:
-            raise ExtractorError('The VOD does not exist', expected=True)
-        elif error_code == -6205:
-            raise ExtractorError('This VOD is private', expected=True)
-
-        common_info = traverse_obj(data, {
-            'title': ('title', {str}),
-            'uploader': ('writer_nick', {str}),
-            'uploader_id': ('bj_id', {str}),
-            'duration': ('total_file_duration', {int_or_none(scale=1000)}),
-            'thumbnails': ('thumb', {self._fixup_thumb}),
-        })
-
-        entries = []
-        for file_num, file_element in enumerate(
-                traverse_obj(data, ('files', lambda _, v: url_or_none(v['file']))), start=1):
-            file_url = file_element['file']
-            if determine_ext(file_url) == 'm3u8':
-                formats = self._extract_m3u8_formats(
-                    file_url, video_id, 'mp4', m3u8_id='hls',
-                    note=f'Downloading part {file_num} m3u8 information')
-            else:
-                formats = [{
-                    'url': file_url,
-                    'format_id': 'http',
-                }]
-
-            entries.append({
-                **common_info,
-                'id': file_element.get('file_info_key') or f'{video_id}_{file_num}',
-                'title': f'{common_info.get("title") or "Untitled"} (part {file_num})',
-                'formats': formats,
-                **traverse_obj(file_element, {
-                    'duration': ('duration', {int_or_none(scale=1000)}),
-                    'timestamp': ('file_start', {parse_iso8601(delimiter=' ', timezone=dt.timedelta(hours=9))}),
-                }),
-            })
-
-        if traverse_obj(data, ('adult_status', {str})) == 'notLogin':
-            if not entries:
-                self.raise_login_required(
-                    'Only users older than 19 are able to watch this video', method='password')
-            self.report_warning(
-                'In accordance with local laws and regulations, underage users are '
-                'restricted from watching adult content. Only content suitable for all '
-                f'ages will be downloaded. {self._login_hint("password")}')
-
-        if not entries and traverse_obj(data, ('sub_upload_type', {str})):
-            self.raise_login_required('This VOD is for subscribers only', method='password')
-
-        if len(entries) == 1:
-            return {
-                **entries[0],
-                'title': common_info.get('title'),
-            }
-
-        common_info['timestamp'] = traverse_obj(entries, (..., 'timestamp'), get_all=False)
-
-        return self.playlist_result(entries, video_id, multi_video=True, **common_info)
+
+        partial_view = False
+        adult_view = False
+        for _ in range(2):
+            data = self._download_json(
+                'https://api.m.afreecatv.com/station/video/a/view',
+                video_id, headers={'Referer': url}, data=urlencode_postdata({
+                    'nTitleNo': video_id,
+                    'nApiLevel': 10,
+                }))['data']
+            if traverse_obj(data, ('code', {int})) == -6221:
+                raise ExtractorError('The VOD does not exist', expected=True)
+            query = {
+                'nTitleNo': video_id,
+                'nStationNo': data['station_no'],
+                'nBbsNo': data['bbs_no'],
+            }
+            if partial_view:
+                query['partialView'] = 'SKIP_ADULT'
+            if adult_view:
+                query['adultView'] = 'ADULT_VIEW'
+            video_xml = self._download_xml(
+                'http://afbbs.afreecatv.com:8080/api/video/get_video_info.php',
+                video_id, 'Downloading video info XML%s'
+                % (' (skipping adult)' if partial_view else ''),
+                video_id, headers={
+                    'Referer': url,
+                }, query=query)
+
+            flag = xpath_text(video_xml, './track/flag', 'flag', default=None)
+            if flag and flag == 'SUCCEED':
+                break
+            if flag == 'PARTIAL_ADULT':
+                self.report_warning(
+                    'In accordance with local laws and regulations, underage users are restricted from watching adult content. '
+                    'Only content suitable for all ages will be downloaded. '
+                    'Provide account credentials if you wish to download restricted content.')
+                partial_view = True
+                continue
+            elif flag == 'ADULT':
+                if not adult_view:
+                    adult_view = True
+                    continue
+                error = 'Only users older than 19 are able to watch this video. Provide account credentials to download this content.'
+            else:
+                error = flag
+            raise ExtractorError(
+                '%s said: %s' % (self.IE_NAME, error), expected=True)
+        else:
+            raise ExtractorError('Unable to download video info')
+
+        video_element = video_xml.findall('./track/video')[-1]
+        if video_element is None or video_element.text is None:
+            raise ExtractorError(
+                'Video %s does not exist' % video_id, expected=True)
+
+        video_url = video_element.text.strip()
+
+        title = xpath_text(video_xml, './track/title', 'title', fatal=True)
+
+        uploader = xpath_text(video_xml, './track/nickname', 'uploader')
+        uploader_id = xpath_text(video_xml, './track/bj_id', 'uploader id')
+        duration = int_or_none(xpath_text(
+            video_xml, './track/duration', 'duration'))
+        thumbnail = xpath_text(video_xml, './track/titleImage', 'thumbnail')
+
+        common_entry = {
+            'uploader': uploader,
+            'uploader_id': uploader_id,
+            'thumbnail': thumbnail,
+        }
+
+        info = common_entry.copy()
+        info.update({
+            'id': video_id,
+            'title': title,
+            'duration': duration,
+        })
+
+        if not video_url:
+            entries = []
+            file_elements = video_element.findall('./file')
+            one = len(file_elements) == 1
+            for file_num, file_element in enumerate(file_elements, start=1):
+                file_url = url_or_none(file_element.text)
+                if not file_url:
+                    continue
+                key = file_element.get('key', '')
+                upload_date = unified_strdate(self._search_regex(
+                    r'^(\d{8})_', key, 'upload date', default=None))
+                if upload_date is not None:
+                    # sometimes the upload date isn't included in the file name
+                    # instead, another random ID is, which may parse as a valid
+                    # date but be wildly out of a reasonable range
+                    parsed_date = date_from_str(upload_date)
+                    if parsed_date.year < 2000 or parsed_date.year >= 2100:
+                        upload_date = None
+                file_duration = int_or_none(file_element.get('duration'))
+                format_id = key if key else '%s_%s' % (video_id, file_num)
+                if determine_ext(file_url) == 'm3u8':
+                    formats = self._extract_m3u8_formats(
+                        file_url, video_id, 'mp4', entry_protocol='m3u8_native',
+                        m3u8_id='hls',
+                        note='Downloading part %d m3u8 information' % file_num)
+                else:
+                    formats = [{
+                        'url': file_url,
+                        'format_id': 'http',
+                    }]
+                if not formats and not self.get_param('ignore_no_formats'):
+                    continue
+                file_info = common_entry.copy()
+                file_info.update({
+                    'id': format_id,
+                    'title': title if one else '%s (part %d)' % (title, file_num),
+                    'upload_date': upload_date,
+                    'duration': file_duration,
+                    'formats': formats,
+                })
+                entries.append(file_info)
+
+            entries_info = info.copy()
+            entries_info.update({
+                '_type': 'multi_video',
+                'entries': entries,
+            })
+            return entries_info
+
+        if determine_ext(video_url) == 'm3u8':
+            info['formats'] = self._extract_m3u8_formats(
+                video_url, video_id, 'mp4', entry_protocol='m3u8_native',
+                m3u8_id='hls')
+        else:
+            app, playpath = video_url.split('mp4:')
+            info.update({
+                'url': app,
+                'ext': 'flv',
+                'play_path': 'mp4:' + playpath,
+                'rtmp_live': True,  # downloading won't end without this
+            })
+
+        return info
 
 
-class AfreecaTVCatchStoryIE(AfreecaTVBaseIE):
-    IE_NAME = 'soop:catchstory'
-    IE_DESC = 'sooplive.co.kr catch story'
-    _VALID_URL = r'https?://vod\.(?:sooplive\.co\.kr|afreecatv\.com)/player/(?P<id>\d+)/catchstory'
-    _TESTS = [{
-        'url': 'https://vod.sooplive.co.kr/player/103247/catchstory',
-        'info_dict': {
-            'id': '103247',
-        },
-        'playlist_count': 2,
-    }]
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-        data = self._call_api(
-            'catchstory/a/view', video_id, headers={'Referer': url},
-            query={'aStoryListIdx': '', 'nStoryIdx': video_id})
-
-        return self.playlist_result(self._entries(data), video_id)
-
-    def _entries(self, data):
-        # 'files' is always a list with 1 element
-        yield from traverse_obj(data, (
-            'data', lambda _, v: v['story_type'] == 'catch',
-            'catch_list', lambda _, v: v['files'][0]['file'], {
-                'id': ('files', 0, 'file_info_key', {str}),
-                'url': ('files', 0, 'file', {url_or_none}),
-                'duration': ('files', 0, 'duration', {int_or_none(scale=1000)}),
-                'title': ('title', {str}),
-                'uploader': ('writer_nick', {str}),
-                'uploader_id': ('writer_id', {str}),
-                'thumbnails': ('thumb', {self._fixup_thumb}),
-                'timestamp': ('write_timestamp', {int_or_none}),
-            }))
-
-
-class AfreecaTVLiveIE(AfreecaTVBaseIE):
-    IE_NAME = 'soop:live'
-    IE_DESC = 'sooplive.co.kr livestreams'
-    _VALID_URL = r'https?://play\.(?:sooplive\.co\.kr|afreecatv\.com)/(?P<id>[^/?#]+)(?:/(?P<bno>\d+))?'
-    _TESTS = [{
-        'url': 'https://play.sooplive.co.kr/pyh3646/237852185',
+class AfreecaTVLiveIE(AfreecaTVIE):  # XXX: Do not subclass from concrete IE
+
+    IE_NAME = 'afreecatv:live'
+    _VALID_URL = r'https?://play\.afreeca(?:tv)?\.com/(?P<id>[^/]+)(?:/(?P<bno>\d+))?'
+    _TESTS = [{
+        'url': 'https://play.afreecatv.com/pyh3646/237852185',
         'info_dict': {
             'id': '237852185',
             'ext': 'mp4',
@@ -268,121 +347,94 @@ class AfreecaTVLiveIE(AfreecaTVBaseIE):
         },
         'skip': 'Livestream has ended',
     }, {
-        'url': 'https://play.sooplive.co.kr/pyh3646/237852185',
+        'url': 'http://play.afreeca.com/pyh3646/237852185',
         'only_matching': True,
     }, {
-        'url': 'https://play.sooplive.co.kr/pyh3646',
+        'url': 'http://play.afreeca.com/pyh3646',
         'only_matching': True,
     }]
 
-    _LIVE_API_URL = 'https://live.sooplive.co.kr/afreeca/player_live_api.php'
-    _WORKING_CDNS = [
-        'gcp_cdn',  # live-global-cdn-v02.sooplive.co.kr
-        'gs_cdn_pc_app',  # pc-app.stream.sooplive.co.kr
-        'gs_cdn_mobile_web',  # mobile-web.stream.sooplive.co.kr
-        'gs_cdn_pc_web',  # pc-web.stream.sooplive.co.kr
-    ]
-    _BAD_CDNS = [
-        'gs_cdn',  # chromecast.afreeca.gscdn.com (cannot resolve)
-        'gs_cdn_chromecast',  # chromecast.stream.sooplive.co.kr (HTTP Error 400)
-        'azure_cdn',  # live-global-cdn-v01.sooplive.co.kr (cannot resolve)
-        'aws_cf',  # live-global-cdn-v03.sooplive.co.kr (cannot resolve)
-        'kt_cdn',  # kt.stream.sooplive.co.kr (HTTP Error 400)
-    ]
-
-    def _extract_formats(self, channel_info, broadcast_no, aid):
-        stream_base_url = channel_info.get('RMD') or 'https://livestream-manager.sooplive.co.kr'
-
-        # If user has not passed CDN IDs, try API-provided CDN ID followed by other working CDN IDs
-        default_cdn_ids = orderedSet([
-            *traverse_obj(channel_info, ('CDN', {str}, all, lambda _, v: v not in self._BAD_CDNS)),
-            *self._WORKING_CDNS,
-        ])
-        cdn_ids = self._configuration_arg('cdn', default_cdn_ids)
-
-        for attempt, cdn_id in enumerate(cdn_ids, start=1):
-            m3u8_url = traverse_obj(self._download_json(
-                urljoin(stream_base_url, 'broad_stream_assign.html'), broadcast_no,
-                f'Downloading {cdn_id} stream info', f'Unable to download {cdn_id} stream info',
-                fatal=False, query={
-                    'return_type': cdn_id,
-                    'broad_key': f'{broadcast_no}-common-master-hls',
-                }), ('view_url', {url_or_none}))
-            try:
-                return self._extract_m3u8_formats(
-                    m3u8_url, broadcast_no, 'mp4', m3u8_id='hls', query={'aid': aid},
-                    headers={'Referer': 'https://play.sooplive.co.kr/'})
-            except ExtractorError as e:
-                if attempt == len(cdn_ids):
-                    raise
-                self.report_warning(
-                    f'{e.cause or e.msg}. Retrying... (attempt {attempt} of {len(cdn_ids)})')
+    _LIVE_API_URL = 'https://live.afreecatv.com/afreeca/player_live_api.php'
+
+    _QUALITIES = ('sd', 'hd', 'hd2k', 'original')
     def _real_extract(self, url):
         broadcaster_id, broadcast_no = self._match_valid_url(url).group('id', 'bno')
-        channel_info = traverse_obj(self._download_json(
-            self._LIVE_API_URL, broadcaster_id, data=urlencode_postdata({'bid': broadcaster_id})),
-            ('CHANNEL', {dict})) or {}
+        password = self.get_param('videopassword')
+
+        info = self._download_json(self._LIVE_API_URL, broadcaster_id, fatal=False,
+                                   data=urlencode_postdata({'bid': broadcaster_id})) or {}
+        channel_info = info.get('CHANNEL') or {}
         broadcaster_id = channel_info.get('BJID') or broadcaster_id
         broadcast_no = channel_info.get('BNO') or broadcast_no
+        password_protected = channel_info.get('BPWD')
         if not broadcast_no:
-            result = channel_info.get('RESULT')
-            if result == 0:
-                raise UserNotLive(video_id=broadcaster_id)
-            elif result == -6:
-                self.raise_login_required(
-                    'This channel is streaming for subscribers only', method='password')
-            raise ExtractorError('Unable to extract broadcast number')
-
-        password = self.get_param('videopassword')
-        if channel_info.get('BPWD') == 'Y' and password is None:
+            raise ExtractorError(f'Unable to extract broadcast number ({broadcaster_id} may not be live)', expected=True)
+        if password_protected == 'Y' and password is None:
             raise ExtractorError(
                 'This livestream is protected by a password, use the --video-password option',
                 expected=True)

-        token_info = traverse_obj(self._download_json(
-            self._LIVE_API_URL, broadcast_no, 'Downloading access token for stream',
-            'Unable to download access token for stream', data=urlencode_postdata(filter_dict({
-                'bno': broadcast_no,
-                'stream_type': 'common',
-                'type': 'aid',
-                'quality': 'master',
-                'pwd': password,
-            }))), ('CHANNEL', {dict})) or {}
-        aid = token_info.get('AID')
-        if not aid:
-            result = token_info.get('RESULT')
-            if result == 0:
-                raise ExtractorError('This livestream has ended', expected=True)
-            elif result == -6:
-                self.raise_login_required('This livestream is for subscribers only', method='password')
-            raise ExtractorError('Unable to extract access token')
-
-        formats = self._extract_formats(channel_info, broadcast_no, aid)
+        formats = []
+        quality_key = qualities(self._QUALITIES)
+        for quality_str in self._QUALITIES:
+            params = {
+                'bno': broadcast_no,
+                'stream_type': 'common',
+                'type': 'aid',
+                'quality': quality_str,
+            }
+            if password is not None:
+                params['pwd'] = password
+            aid_response = self._download_json(
+                self._LIVE_API_URL, broadcast_no, fatal=False,
+                data=urlencode_postdata(params),
+                note=f'Downloading access token for {quality_str} stream',
+                errnote=f'Unable to download access token for {quality_str} stream')
+            aid = traverse_obj(aid_response, ('CHANNEL', 'AID'))
+            if not aid:
+                continue
+            stream_base_url = channel_info.get('RMD') or 'https://livestream-manager.afreecatv.com'
+            stream_info = self._download_json(
+                f'{stream_base_url}/broad_stream_assign.html', broadcast_no, fatal=False,
+                query={
+                    'return_type': channel_info.get('CDN', 'gcp_cdn'),
+                    'broad_key': f'{broadcast_no}-common-{quality_str}-hls',
+                },
+                note=f'Downloading metadata for {quality_str} stream',
+                errnote=f'Unable to download metadata for {quality_str} stream') or {}
+            if stream_info.get('view_url'):
+                formats.append({
+                    'format_id': quality_str,
+                    'url': update_url_query(stream_info['view_url'], {'aid': aid}),
+                    'ext': 'mp4',
+                    'protocol': 'm3u8',
+                    'quality': quality_key(quality_str),
+                })

-        station_info = traverse_obj(self._download_json(
-            'https://st.sooplive.co.kr/api/get_station_status.php', broadcast_no,
-            'Downloading channel metadata', 'Unable to download channel metadata',
-            query={'szBjId': broadcaster_id}, fatal=False), {dict}) or {}
+        station_info = self._download_json(
+            'https://st.afreecatv.com/api/get_station_status.php', broadcast_no,
+            query={'szBjId': broadcaster_id}, fatal=False,
+            note='Downloading channel metadata', errnote='Unable to download channel metadata') or {}

         return {
             'id': broadcast_no,
             'title': channel_info.get('TITLE') or station_info.get('station_title'),
             'uploader': channel_info.get('BJNICK') or station_info.get('station_name'),
             'uploader_id': broadcaster_id,
-            'timestamp': parse_iso8601(station_info.get('broad_start'), delimiter=' ', timezone=dt.timedelta(hours=9)),
+            'timestamp': unified_timestamp(station_info.get('broad_start')),
             'formats': formats,
             'is_live': True,
-            'http_headers': {'Referer': url},
         }
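The old code path ranks its fixed quality tiers with the `qualities` helper. A short sketch of how that helper behaves, assuming yt-dlp is installed; the tier names mirror the `_QUALITIES` tuple above:

```python
from yt_dlp.utils import qualities

# qualities() returns a key function: higher index means better quality
quality_key = qualities(('sd', 'hd', 'hd2k', 'original'))
print(quality_key('sd'))        # 0 (lowest)
print(quality_key('original'))  # 3 (highest)
print(quality_key('unknown'))   # -1 for names outside the tuple
```

Attaching that value as 'quality' lets the format sorter prefer 'original' over 'sd' without any site-specific logic.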
-class AfreecaTVUserIE(AfreecaTVBaseIE):
-    IE_NAME = 'soop:user'
-    _VALID_URL = r'https?://ch\.(?:sooplive\.co\.kr|afreecatv\.com)/(?P<id>[^/?#]+)/vods/?(?P<slug_type>[^/?#]+)?'
+class AfreecaTVUserIE(InfoExtractor):
+    IE_NAME = 'afreecatv:user'
+    _VALID_URL = r'https?://bj\.afreeca(?:tv)?\.com/(?P<id>[^/]+)/vods/?(?P<slug_type>[^/]+)?'
     _TESTS = [{
-        'url': 'https://ch.sooplive.co.kr/ryuryu24/vods/review',
+        'url': 'https://bj.afreecatv.com/ryuryu24/vods/review',
         'info_dict': {
             '_type': 'playlist',
             'id': 'ryuryu24',
@@ -390,7 +442,7 @@ class AfreecaTVUserIE(AfreecaTVBaseIE):
         },
         'playlist_count': 218,
     }, {
-        'url': 'https://ch.sooplive.co.kr/parang1995/vods/highlight',
+        'url': 'https://bj.afreecatv.com/parang1995/vods/highlight',
         'info_dict': {
             '_type': 'playlist',
             'id': 'parang1995',
@@ -398,7 +450,7 @@ class AfreecaTVUserIE(AfreecaTVBaseIE):
         },
         'playlist_count': 997,
     }, {
-        'url': 'https://ch.sooplive.co.kr/ryuryu24/vods',
+        'url': 'https://bj.afreecatv.com/ryuryu24/vods',
         'info_dict': {
             '_type': 'playlist',
             'id': 'ryuryu24',
@@ -406,7 +458,7 @@ class AfreecaTVUserIE(AfreecaTVBaseIE):
         },
         'playlist_count': 221,
     }, {
-        'url': 'https://ch.sooplive.co.kr/ryuryu24/vods/balloonclip',
+        'url': 'https://bj.afreecatv.com/ryuryu24/vods/balloonclip',
         'info_dict': {
             '_type': 'playlist',
             'id': 'ryuryu24',
@@ -418,12 +470,12 @@ class AfreecaTVUserIE(AfreecaTVBaseIE):
     def _fetch_page(self, user_id, user_type, page):
         page += 1
-        info = self._download_json(f'https://chapi.sooplive.co.kr/api/{user_id}/vods/{user_type}', user_id,
+        info = self._download_json(f'https://bjapi.afreecatv.com/api/{user_id}/vods/{user_type}', user_id,
                                    query={'page': page, 'per_page': self._PER_PAGE, 'orderby': 'reg_date'},
                                    note=f'Downloading {user_type} video page {page}')
         for item in info['data']:
             yield self.url_result(
-                f'https://vod.sooplive.co.kr/player/{item["title_no"]}/', AfreecaTVIE, item['title_no'])
+                f'https://vod.afreecatv.com/player/{item["title_no"]}/', AfreecaTVIE, item['title_no'])

     def _real_extract(self, url):
         user_id, user_type = self._match_valid_url(url).group('id', 'slug_type')
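`_fetch_page` above is a page generator; in yt-dlp such generators are normally driven by a lazy pager like `OnDemandPagedList` (imported explicitly by the Allstar extractor later in this diff). A minimal sketch under that assumption, with a fake API standing in for the VOD endpoint:

```python
from yt_dlp.utils import OnDemandPagedList

PER_PAGE = 2
FAKE_API = [f'video-{n}' for n in range(5)]  # stands in for a paged HTTP API

def fetch_page(page):  # zero-based; called only when a page is first needed
    start = page * PER_PAGE
    yield from FAKE_API[start:start + PER_PAGE]

entries = OnDemandPagedList(fetch_page, PER_PAGE)
print(entries.getslice(0, 3))  # ['video-0', 'video-1', 'video-2']
```

Only the pages needed to satisfy a slice are fetched, so range-limited playlist requests avoid downloading every page.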

@@ -146,7 +146,7 @@ class TokFMPodcastIE(InfoExtractor):
         'url': 'https://audycje.tokfm.pl/podcast/91275,-Systemowy-rasizm-Czy-zamieszki-w-USA-po-morderstwie-w-Minneapolis-doprowadza-do-zmian-w-sluzbach-panstwowych',
         'info_dict': {
             'id': '91275',
-            'ext': 'mp3',
+            'ext': 'aac',
             'title': 'md5:a9b15488009065556900169fb8061cce',
             'episode': 'md5:a9b15488009065556900169fb8061cce',
             'series': 'Analizy',
@@ -164,20 +164,23 @@ class TokFMPodcastIE(InfoExtractor):
             raise ExtractorError('No such podcast', expected=True)
         metadata = metadata[0]

-        mp3_url = self._download_json(
-            'https://api.podcast.radioagora.pl/api4/getSongUrl',
-            media_id, 'Downloading podcast mp3 URL', query={
-                'podcast_id': media_id,
-                'device_id': str(uuid.uuid4()),
-                'ppre': 'false',
-                'audio': 'mp3',
-            })['link_ssl']
+        formats = []
+        for ext in ('aac', 'mp3'):
+            url_data = self._download_json(
+                f'https://api.podcast.radioagora.pl/api4/getSongUrl?podcast_id={media_id}&device_id={uuid.uuid4()}&ppre=false&audio={ext}',
+                media_id, 'Downloading podcast %s URL' % ext)
+            # prevents inserting the mp3 (default) multiple times
+            if 'link_ssl' in url_data and f'.{ext}' in url_data['link_ssl']:
+                formats.append({
+                    'url': url_data['link_ssl'],
+                    'ext': ext,
+                    'vcodec': 'none',
+                    'acodec': ext,
+                })

         return {
             'id': media_id,
-            'url': mp3_url,
-            'vcodec': 'none',
-            'ext': 'mp3',
+            'formats': formats,
             'title': metadata.get('podcast_name'),
             'series': metadata.get('series_name'),
             'episode': metadata.get('podcast_name'),
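Both sides of this hunk hit the same getSongUrl endpoint, differing only in whether they probe each audio extension or request mp3 directly. A hedged standalone sketch of that exchange using `requests`, with only the parameters visible in the diff (podcast_id, device_id, ppre, audio, link_ssl); untested against the live API:

```python
import uuid

import requests

def get_podcast_url(media_id, audio='mp3'):
    resp = requests.get(
        'https://api.podcast.radioagora.pl/api4/getSongUrl',
        params={
            'podcast_id': media_id,
            'device_id': str(uuid.uuid4()),  # a fresh device ID per request
            'ppre': 'false',
            'audio': audio,
        }, timeout=10)
    resp.raise_for_status()
    return resp.json().get('link_ssl')  # direct URL of the audio file
```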
@@ -203,8 +206,8 @@ class TokFMAuditionIE(InfoExtractor):
         }

     @staticmethod
-    def _create_url(video_id):
-        return f'https://audycje.tokfm.pl/audycja/{video_id}'
+    def _create_url(id):
+        return f'https://audycje.tokfm.pl/audycja/{id}'

     def _real_extract(self, url):
         audition_id = self._match_id(url)

@@ -0,0 +1,63 @@
import re

from .common import InfoExtractor
from ..utils import (
    int_or_none,
    parse_duration,
    parse_iso8601,
)


class AirMozillaIE(InfoExtractor):
    _VALID_URL = r'https?://air\.mozilla\.org/(?P<id>[0-9a-z-]+)/?'
    _TEST = {
        'url': 'https://air.mozilla.org/privacy-lab-a-meetup-for-privacy-minded-people-in-san-francisco/',
        'md5': '8d02f53ee39cf006009180e21df1f3ba',
        'info_dict': {
            'id': '6x4q2w',
            'ext': 'mp4',
            'title': 'Privacy Lab - a meetup for privacy minded people in San Francisco',
            'thumbnail': r're:https?://.*/poster\.jpg',
            'description': 'Brings together privacy professionals and others interested in privacy at for-profits, non-profits, and NGOs in an effort to contribute to the state of the ecosystem...',
            'timestamp': 1422487800,
            'upload_date': '20150128',
            'location': 'SFO Commons',
            'duration': 3780,
            'view_count': int,
            'categories': ['Main', 'Privacy'],
        }
    }

    def _real_extract(self, url):
        display_id = self._match_id(url)
        webpage = self._download_webpage(url, display_id)
        video_id = self._html_search_regex(r'//vid\.ly/(.*?)/embed', webpage, 'id')

        embed_script = self._download_webpage('https://vid.ly/{0}/embed'.format(video_id), video_id)
        jwconfig = self._parse_json(self._search_regex(
            r'initCallback\((.*)\);', embed_script, 'metadata'), video_id)['config']

        info_dict = self._parse_jwplayer_data(jwconfig, video_id)
        view_count = int_or_none(self._html_search_regex(
            r'Views since archived: ([0-9]+)',
            webpage, 'view count', fatal=False))
        timestamp = parse_iso8601(self._html_search_regex(
            r'<time datetime="(.*?)"', webpage, 'timestamp', fatal=False))
        duration = parse_duration(self._search_regex(
            r'Duration:\s*(\d+\s*hours?\s*\d+\s*minutes?)',
            webpage, 'duration', fatal=False))

        info_dict.update({
            'id': video_id,
            'title': self._og_search_title(webpage),
            'url': self._og_search_url(webpage),
            'display_id': display_id,
            'description': self._og_search_description(webpage),
            'timestamp': timestamp,
            'location': self._html_search_regex(r'Location: (.*)', webpage, 'location', default=None),
            'duration': duration,
            'view_count': view_count,
            'categories': re.findall(r'<a href=".*?" class="channel">(.*?)</a>', webpage),
        })

        return info_dict
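The extractor above pulls a JWPlayer config out of a vid.ly embed script by regexing the `initCallback(...)` argument and parsing it as JSON. A standalone sketch of just that step; the sample script text is invented, the regex is the one used above:

```python
import json
import re

embed_script = 'window.onload = initCallback({"config": {"file": "https://example.com/v.mp4"}});'
raw = re.search(r'initCallback\((.*)\);', embed_script).group(1)
jwconfig = json.loads(raw)['config']
print(jwconfig['file'])  # https://example.com/v.mp4
```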

@@ -5,7 +5,7 @@ from ..utils import (
     int_or_none,
     mimetype2ext,
     parse_iso8601,
-    traverse_obj,
+    traverse_obj
 )
@@ -26,7 +26,7 @@ class AirTVIE(InfoExtractor):
             'view_count': int,
             'thumbnail': 'https://cdn-sp-gcs.air.tv/videos/W/8/W87jcWleSn2hXZN47zJZsQ/b13fc56464f47d9d62a36d110b9b5a72-4096x2160_9.jpg',
             'timestamp': 1664792603,
-        },
+        }
     }, {
         # with youtube_id
         'url': 'https://www.air.tv/watch?v=sv57EC8tRXG6h8dNXFUU1Q',
@@ -54,7 +54,7 @@ class AirTVIE(InfoExtractor):
             'channel': 'Newsflare',
             'duration': 37,
             'upload_date': '20180511',
-        },
+        }
     }]

     def _get_formats_and_subtitle(self, json_data, video_id):

@@ -22,7 +22,7 @@ class AitubeKZVideoIE(InfoExtractor):
             'timestamp': 1667370519,
             'title': 'Ангел хранитель 1 серия',
             'channel_follower_count': int,
-        },
+        }
     }, {
         # embed url
         'url': 'https://aitube.kz/embed/?id=9291d29b-c038-49a1-ad42-3da2051d353c',

@@ -1,4 +1,5 @@
 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
     float_or_none,
     try_get,
@@ -43,7 +44,7 @@ class AliExpressLiveIE(InfoExtractor):
         'title': title,
         'thumbnail': data.get('coverUrl'),
         'uploader': try_get(
-            data, lambda x: x['followBar']['name'], str),
+            data, lambda x: x['followBar']['name'], compat_str),
         'timestamp': float_or_none(data.get('startTimeLong'), scale=1000),
         'formats': formats,
     }

@@ -18,7 +18,7 @@ class AlJazeeraIE(InfoExtractor):
             'timestamp': 1636219149,
             'description': 'U sarajevskim naseljima Rajlovac i Reljevo stambeni objekti, ali i industrijska postrojenja i dalje su pod vodom.',
             'upload_date': '20211106',
-        },
+        }
     }, {
         'url': 'https://balkans.aljazeera.net/videos/2021/11/6/djokovic-usao-u-finale-mastersa-u-parizu',
         'info_dict': {
@@ -33,7 +33,7 @@ class AlJazeeraIE(InfoExtractor):
     BRIGHTCOVE_URL_RE = r'https?://players.brightcove.net/(?P<account>\d+)/(?P<player_id>[a-zA-Z0-9]+)_(?P<embed>[^/]+)/index.html\?videoId=(?P<id>\d+)'

     def _real_extract(self, url):
-        base, post_type, display_id = self._match_valid_url(url).groups()
+        base, post_type, id = self._match_valid_url(url).groups()
         wp = {
             'balkans.aljazeera.net': 'ajb',
             'chinese.aljazeera.net': 'chinese',
@@ -47,11 +47,11 @@ class AlJazeeraIE(InfoExtractor):
             'news': 'news',
         }[post_type.split('/')[0]]
         video = self._download_json(
-            f'https://{base}/graphql', display_id, query={
+            f'https://{base}/graphql', id, query={
                 'wp-site': wp,
                 'operationName': 'ArchipelagoSingleArticleQuery',
                 'variables': json.dumps({
-                    'name': display_id,
+                    'name': id,
                     'postType': post_type,
                 }),
             }, headers={
@@ -64,7 +64,7 @@ class AlJazeeraIE(InfoExtractor):
         embed = 'default'

         if video_id is None:
-            webpage = self._download_webpage(url, display_id)
+            webpage = self._download_webpage(url, id)

             account, player_id, embed, video_id = self._search_regex(self.BRIGHTCOVE_URL_RE, webpage, 'video id',
                                                                      group=(1, 2, 3, 4), default=(None, None, None, None))
@@ -73,11 +73,11 @@ class AlJazeeraIE(InfoExtractor):
             return {
                 '_type': 'url_transparent',
                 'url': url,
-                'ie_key': 'Generic',
+                'ie_key': 'Generic'
             }

         return {
             '_type': 'url_transparent',
             'url': f'https://players.brightcove.net/{account}/{player_id}_{embed}/index.html?videoId={video_id}',
-            'ie_key': 'BrightcoveNew',
+            'ie_key': 'BrightcoveNew'
         }

@@ -1,4 +1,5 @@
 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
     int_or_none,
     qualities,
@@ -94,11 +95,11 @@ class AllocineIE(InfoExtractor):
         duration = int_or_none(video.get('duration'))
         view_count = int_or_none(video.get('view_count'))
         timestamp = unified_timestamp(try_get(
-            video, lambda x: x['added_at']['date'], str))
+            video, lambda x: x['added_at']['date'], compat_str))
     else:
         video_id = display_id
         media_data = self._download_json(
-            f'http://www.allocine.fr/ws/AcVisiondataV5.ashx?media={video_id}', display_id)
+            'http://www.allocine.fr/ws/AcVisiondataV5.ashx?media=%s' % video_id, display_id)
         title = remove_end(strip_or_none(self._html_extract_title(webpage)), ' - AlloCiné')
         for key, value in media_data['video'].items():
             if not key.endswith('Path'):

@@ -1,252 +0,0 @@
import functools
import json

from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    OnDemandPagedList,
    int_or_none,
    join_nonempty,
    parse_qs,
    urljoin,
)
from ..utils.traversal import traverse_obj

_FIELDS = '''
    _id
    clipImageSource
    clipImageThumb
    clipLink
    clipTitle
    createdDate
    shareId
    user { _id }
    username
    views'''

_EXTRA_FIELDS = '''
    clipLength
    clipSizeBytes'''

_QUERIES = {
    'clip': '''query ($id: String!) {
        video: getClip(clipIdentifier: $id) {
            %s %s
        }
    }''' % (_FIELDS, _EXTRA_FIELDS),  # noqa: UP031
    'montage': '''query ($id: String!) {
        video: getMontage(clipIdentifier: $id) {
            %s
        }
    }''' % _FIELDS,  # noqa: UP031
    'Clips': '''query ($page: Int!, $user: String!, $game: Int) {
        videos: clips(search: createdDate, page: $page, user: $user, mobile: false, game: $game) {
            data { %s %s }
        }
    }''' % (_FIELDS, _EXTRA_FIELDS),  # noqa: UP031
    'Montages': '''query ($page: Int!, $user: String!) {
        videos: montages(search: createdDate, page: $page, user: $user) {
            data { %s }
        }
    }''' % _FIELDS,  # noqa: UP031
    'Mobile Clips': '''query ($page: Int!, $user: String!) {
        videos: clips(search: createdDate, page: $page, user: $user, mobile: true) {
            data { %s %s }
        }
    }''' % (_FIELDS, _EXTRA_FIELDS),  # noqa: UP031
}


class AllstarBaseIE(InfoExtractor):
    @staticmethod
    def _parse_video_data(video_data):
        def media_url_or_none(path):
            return urljoin('https://media.allstar.gg/', path)

        info = traverse_obj(video_data, {
            'id': ('_id', {str}),
            'display_id': ('shareId', {str}),
            'title': ('clipTitle', {str}),
            'url': ('clipLink', {media_url_or_none}),
            'thumbnails': (('clipImageThumb', 'clipImageSource'), {'url': {media_url_or_none}}),
            'duration': ('clipLength', {int_or_none}),
            'filesize': ('clipSizeBytes', {int_or_none}),
            'timestamp': ('createdDate', {int_or_none(scale=1000)}),
            'uploader': ('username', {str}),
            'uploader_id': ('user', '_id', {str}),
            'view_count': ('views', {int_or_none}),
        })

        if info.get('id') and info.get('url'):
            basename = 'clip' if '/clips/' in info['url'] else 'montage'
            info['webpage_url'] = f'https://allstar.gg/{basename}?{basename}={info["id"]}'

        info.update({
            'extractor_key': AllstarIE.ie_key(),
            'extractor': AllstarIE.IE_NAME,
            'uploader_url': urljoin('https://allstar.gg/u/', info.get('uploader_id')),
        })

        return info

    def _call_api(self, query, variables, path, video_id=None, note=None):
        response = self._download_json(
            'https://a1.allstar.gg/graphql', video_id, note=note,
            headers={'content-type': 'application/json'},
            data=json.dumps({'variables': variables, 'query': query}).encode())

        errors = traverse_obj(response, ('errors', ..., 'message', {str}))
        if errors:
            raise ExtractorError('; '.join(errors))

        return traverse_obj(response, path)


class AllstarIE(AllstarBaseIE):
    _VALID_URL = r'https?://(?:www\.)?allstar\.gg/(?P<type>(?:clip|montage))\?(?P=type)=(?P<id>[^/?#&]+)'

    _TESTS = [{
        'url': 'https://allstar.gg/clip?clip=64482c2da9eec30008a67d1b',
        'info_dict': {
            'id': '64482c2da9eec30008a67d1b',
            'title': '4K on Inferno',
            'url': 'md5:66befb5381eef0c9456026386c25fa55',
            'thumbnail': r're:https://media\.allstar\.gg/.+\.(?:png|jpg)$',
            'uploader': 'chrk.',
            'ext': 'mp4',
            'duration': 20,
            'filesize': 21199257,
            'timestamp': 1682451501,
            'uploader_id': '62b8bdfc9021052f7905882d',
            'uploader_url': 'https://allstar.gg/u/62b8bdfc9021052f7905882d',
            'upload_date': '20230425',
            'view_count': int,
        },
    }, {
        'url': 'https://allstar.gg/clip?clip=8LJLY4JKB',
        'info_dict': {
            'id': '64a1ec6b887f4c0008dc50b8',
            'display_id': '8LJLY4JKB',
            'title': 'AK-47 3K on Mirage',
            'url': 'md5:dde224fd12f035c0e2529a4ae34c4283',
            'ext': 'mp4',
            'thumbnail': r're:https://media\.allstar\.gg/.+\.(?:png|jpg)$',
            'duration': 16,
            'filesize': 30175859,
            'timestamp': 1688333419,
            'uploader': 'cherokee',
            'uploader_id': '62b8bdfc9021052f7905882d',
            'uploader_url': 'https://allstar.gg/u/62b8bdfc9021052f7905882d',
            'upload_date': '20230702',
            'view_count': int,
        },
    }, {
        'url': 'https://allstar.gg/montage?montage=643e64089da7e9363e1fa66c',
        'info_dict': {
            'id': '643e64089da7e9363e1fa66c',
            'display_id': 'APQLGM2IMXW',
            'title': 'cherokee Rapid Fire Snipers Montage',
            'url': 'md5:a3ee356022115db2b27c81321d195945',
            'thumbnail': r're:https://media\.allstar\.gg/.+\.(?:png|jpg)$',
            'ext': 'mp4',
            'timestamp': 1681810448,
            'uploader': 'cherokee',
            'uploader_id': '62b8bdfc9021052f7905882d',
            'uploader_url': 'https://allstar.gg/u/62b8bdfc9021052f7905882d',
            'upload_date': '20230418',
            'view_count': int,
        },
    }, {
        'url': 'https://allstar.gg/montage?montage=RILJMH6QOS',
        'info_dict': {
            'id': '64a2697372ce3703de29e868',
            'display_id': 'RILJMH6QOS',
            'title': 'cherokee Rapid Fire Snipers Montage',
            'url': 'md5:d5672e6f88579730c2310a80fdbc4030',
            'thumbnail': r're:https://media\.allstar\.gg/.+\.(?:png|jpg)$',
            'ext': 'mp4',
            'timestamp': 1688365434,
            'uploader': 'cherokee',
            'uploader_id': '62b8bdfc9021052f7905882d',
            'uploader_url': 'https://allstar.gg/u/62b8bdfc9021052f7905882d',
            'upload_date': '20230703',
            'view_count': int,
        },
    }]

    def _real_extract(self, url):
        query_id, video_id = self._match_valid_url(url).group('type', 'id')

        return self._parse_video_data(
            self._call_api(
                _QUERIES.get(query_id), {'id': video_id}, ('data', 'video'), video_id))


class AllstarProfileIE(AllstarBaseIE):
    _VALID_URL = r'https?://(?:www\.)?allstar\.gg/(?:profile\?user=|u/)(?P<id>[^/?#&]+)'

    _TESTS = [{
        'url': 'https://allstar.gg/profile?user=62b8bdfc9021052f7905882d',
        'info_dict': {
            'id': '62b8bdfc9021052f7905882d-clips',
            'title': 'cherokee - Clips',
        },
        'playlist_mincount': 15,
    }, {
        'url': 'https://allstar.gg/u/cherokee?game=730&view=Clips',
        'info_dict': {
            'id': '62b8bdfc9021052f7905882d-clips-730',
            'title': 'cherokee - Clips - 730',
        },
        'playlist_mincount': 15,
    }, {
        'url': 'https://allstar.gg/u/62b8bdfc9021052f7905882d?view=Montages',
        'info_dict': {
            'id': '62b8bdfc9021052f7905882d-montages',
            'title': 'cherokee - Montages',
        },
        'playlist_mincount': 4,
    }, {
        'url': 'https://allstar.gg/profile?user=cherokee&view=Mobile Clips',
        'info_dict': {
            'id': '62b8bdfc9021052f7905882d-mobile',
            'title': 'cherokee - Mobile Clips',
        },
        'playlist_mincount': 1,
    }]

    _PAGE_SIZE = 10

    def _get_page(self, user_id, display_id, game, query, page_num):
        page_num += 1

        for video_data in self._call_api(
                query, {
                    'user': user_id,
                    'page': page_num,
                    'game': game,
                }, ('data', 'videos', 'data'), display_id, f'Downloading page {page_num}'):
            yield self._parse_video_data(video_data)

    def _real_extract(self, url):
        display_id = self._match_id(url)
        profile_data = self._download_json(
            urljoin('https://api.allstar.gg/v1/users/profile/', display_id), display_id)
        user_id = traverse_obj(profile_data, ('data', ('_id'), {str}))
        if not user_id:
            raise ExtractorError('Unable to extract the user id')

        username = traverse_obj(profile_data, ('data', 'profile', ('username'), {str}))
        url_query = parse_qs(url)
        game = traverse_obj(url_query, ('game', 0, {int_or_none}))
        query_id = traverse_obj(url_query, ('view', 0), default='Clips')

        if query_id not in ('Clips', 'Montages', 'Mobile Clips'):
            raise ExtractorError(f'Unsupported playlist URL type {query_id!r}')

        return self.playlist_result(
            OnDemandPagedList(
                functools.partial(
                    self._get_page, user_id, display_id, game, _QUERIES.get(query_id)), self._PAGE_SIZE),
            playlist_id=join_nonempty(user_id, query_id.lower().split()[0], game),
            playlist_title=join_nonempty((username or display_id), query_id, game, delim=' - '))
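`_call_api` above is a thin GraphQL client: it POSTs the query and variables as a JSON body and surfaces any `errors` messages. A dependency-free sketch of the same call shape, using only the endpoint and field names shown in the diff; treat it as illustrative rather than a supported API:

```python
import json
import urllib.request

def call_allstar_api(query, variables):
    req = urllib.request.Request(
        'https://a1.allstar.gg/graphql',
        data=json.dumps({'variables': variables, 'query': query}).encode(),
        headers={'content-type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    if payload.get('errors'):  # GraphQL reports errors in-band with HTTP 200
        raise RuntimeError('; '.join(e.get('message', '') for e in payload['errors']))
    return payload.get('data')
```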

@@ -1,9 +1,9 @@
 from .common import InfoExtractor
 from ..utils import (
-    int_or_none,
+    parse_iso8601,
     parse_duration,
     parse_filesize,
-    parse_iso8601,
+    int_or_none,
 )
@@ -25,7 +25,7 @@ class AlphaPornoIE(InfoExtractor):
             'tbr': 1145,
             'categories': list,
             'age_limit': 18,
-        },
+        }
     }

     def _real_extract(self, url):

@@ -12,7 +12,7 @@ from ..utils import (
 class Alsace20TVBaseIE(InfoExtractor):
     def _extract_video(self, video_id, url=None):
         info = self._download_json(
-            f'https://www.alsace20.tv/visionneuse/visio_v9_js.php?key={video_id}&habillage=0&mode=html',
+            'https://www.alsace20.tv/visionneuse/visio_v9_js.php?key=%s&habillage=0&mode=html' % (video_id, ),
             video_id) or {}
         title = info.get('titre')
@@ -24,9 +24,9 @@ class Alsace20TVBaseIE(InfoExtractor):
                 else self._extract_mpd_formats(fmt_url, video_id, mpd_id=res, fatal=False))

         webpage = (url and self._download_webpage(url, video_id, fatal=False)) or ''
-        thumbnail = url_or_none(dict_get(info, ('image', 'preview')) or self._og_search_thumbnail(webpage))
+        thumbnail = url_or_none(dict_get(info, ('image', 'preview', )) or self._og_search_thumbnail(webpage))
         upload_date = self._search_regex(r'/(\d{6})_', thumbnail, 'upload_date', default=None)
-        upload_date = unified_strdate(f'20{upload_date[:2]}-{upload_date[2:4]}-{upload_date[4:]}') if upload_date else None
+        upload_date = unified_strdate('20%s-%s-%s' % (upload_date[:2], upload_date[2:4], upload_date[4:])) if upload_date else None
         return {
             'id': video_id,
             'title': title,

@@ -1,104 +0,0 @@
import re

from .archiveorg import ArchiveOrgIE
from .common import InfoExtractor
from ..utils import (
    InAdvancePagedList,
    clean_html,
    int_or_none,
    orderedSet,
    str_to_int,
    urljoin,
)


class AltCensoredIE(InfoExtractor):
    IE_NAME = 'altcensored'
    _VALID_URL = r'https?://(?:www\.)?altcensored\.com/(?:watch\?v=|embed/)(?P<id>[^/?#]+)'
    _TESTS = [{
        'url': 'https://www.altcensored.com/watch?v=k0srjLSkga8',
        'info_dict': {
            'id': 'youtube-k0srjLSkga8',
            'ext': 'webm',
            'title': "QUELLES SONT LES CONSÉQUENCES DE L'HYPERSEXUALISATION DE LA SOCIÉTÉ ?",
            'display_id': 'k0srjLSkga8.webm',
            'release_date': '20180403',
            'creators': ['Virginie Vota'],
            'release_year': 2018,
            'upload_date': '20230318',
            'uploader': 'admin@altcensored.com',
            'description': 'md5:0b38a8fc04103579d5c1db10a247dc30',
            'timestamp': 1679161343,
            'track': 'k0srjLSkga8',
            'duration': 926.09,
            'thumbnail': 'https://archive.org/download/youtube-k0srjLSkga8/youtube-k0srjLSkga8.thumbs/k0srjLSkga8_000925.jpg',
            'view_count': int,
            'categories': ['News & Politics'],
        },
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        category = clean_html(self._html_search_regex(
            r'<a href="/category/\d+">([^<]+)</a>', webpage, 'category', default=None))

        return {
            '_type': 'url_transparent',
            'url': f'https://archive.org/details/youtube-{video_id}',
            'ie_key': ArchiveOrgIE.ie_key(),
            'view_count': str_to_int(self._html_search_regex(
                r'YouTube Views:(?:\s|&nbsp;)*([\d,]+)', webpage, 'view count', default=None)),
            'categories': [category] if category else None,
        }


class AltCensoredChannelIE(InfoExtractor):
    IE_NAME = 'altcensored:channel'
    _VALID_URL = r'https?://(?:www\.)?altcensored\.com/channel/(?!page|table)(?P<id>[^/?#]+)'
    _PAGE_SIZE = 24
    _TESTS = [{
        'url': 'https://www.altcensored.com/channel/UCFPTO55xxHqFqkzRZHu4kcw',
        'info_dict': {
            'title': 'Virginie Vota',
            'id': 'UCFPTO55xxHqFqkzRZHu4kcw',
        },
        'playlist_count': 85,
    }, {
        'url': 'https://altcensored.com/channel/UC9CcJ96HKMWn0LZlcxlpFTw',
        'info_dict': {
            'title': 'yukikaze775',
            'id': 'UC9CcJ96HKMWn0LZlcxlpFTw',
        },
        'playlist_count': 4,
    }, {
        'url': 'https://altcensored.com/channel/UCfYbb7nga6-icsFWWgS-kWw',
        'info_dict': {
            'title': 'Mister Metokur',
            'id': 'UCfYbb7nga6-icsFWWgS-kWw',
        },
        'playlist_count': 121,
    }]

    def _real_extract(self, url):
        channel_id = self._match_id(url)
        webpage = self._download_webpage(
            url, channel_id, 'Download channel webpage', 'Unable to get channel webpage')
        title = self._html_search_meta('altcen_title', webpage, 'title', fatal=False)
        page_count = int_or_none(self._html_search_regex(
            r'<a[^>]+href="/channel/[\w-]+/page/(\d+)">(?:\1)</a>',
            webpage, 'page count', default='1'))

        def page_func(page_num):
            page_num += 1
            webpage = self._download_webpage(
                f'https://altcensored.com/channel/{channel_id}/page/{page_num}',
                channel_id, note=f'Downloading page {page_num}')

            items = re.findall(r'<a[^>]+href="(/watch\?v=[^"]+)', webpage)
            return [self.url_result(urljoin('https://www.altcensored.com', path), AltCensoredIE)
                    for path in orderedSet(items)]

        return self.playlist_result(
            InAdvancePagedList(page_func, page_count, self._PAGE_SIZE),
            playlist_id=channel_id, playlist_title=title)
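Unlike `OnDemandPagedList`, the `InAdvancePagedList` used above is built when the total page count is already known (scraped from the channel's pager here). A minimal sketch, assuming yt-dlp is installed, with fake pages standing in for scraped channel pages:

```python
from yt_dlp.utils import InAdvancePagedList

PAGE_SIZE = 3
PAGE_COUNT = 2

def page_func(page_num):  # zero-based page index
    return [f'item-{page_num}-{i}' for i in range(PAGE_SIZE)]

playlist = InAdvancePagedList(page_func, PAGE_COUNT, PAGE_SIZE)
print(list(playlist))  # all six items, fetched page by page
```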

@@ -1,13 +1,17 @@
 import re
-import urllib.parse

 from .common import InfoExtractor
+from ..compat import (
+    compat_urlparse,
+)
 from ..utils import (
-    ExtractorError,
-    clean_html,
-    int_or_none,
     urlencode_postdata,
     urljoin,
+    int_or_none,
+    clean_html,
+    ExtractorError
 )
@@ -21,7 +25,7 @@ class AluraIE(InfoExtractor):
         'info_dict': {
             'id': '60095',
             'ext': 'mp4',
-            'title': 'Referências, ref-set e alter',
+            'title': 'Referências, ref-set e alter'
         },
         'skip': 'Requires alura account credentials'},
         {
@@ -30,12 +34,12 @@ class AluraIE(InfoExtractor):
             'only_matching': True},
         {
             'url': 'https://cursos.alura.com.br/course/fundamentos-market-digital/task/55219',
-            'only_matching': True},
+            'only_matching': True}
     ]

     def _real_extract(self, url):
-        course, video_id = self._match_valid_url(url).group('course_name', 'id')
+        course, video_id = self._match_valid_url(url)
         video_url = self._VIDEO_URL % (course, video_id)

         video_dict = self._download_json(video_url, video_id, 'Searching for videos')
@@ -48,7 +52,7 @@ class AluraIE(InfoExtractor):
         formats = []
         for video_obj in video_dict:
-            video_url_m3u8 = video_obj.get('mp4')
+            video_url_m3u8 = video_obj.get('link')
             video_format = self._extract_m3u8_formats(
                 video_url_m3u8, None, 'mp4', entry_protocol='m3u8_native',
                 m3u8_id='hls', fatal=False)
@@ -62,7 +66,7 @@ class AluraIE(InfoExtractor):
         return {
             'id': video_id,
             'title': video_title,
-            'formats': formats,
+            "formats": formats
         }

     def _perform_login(self, username, password):
@@ -91,7 +95,7 @@ class AluraIE(InfoExtractor):
             'post url', default=self._LOGIN_URL, group='url')

         if not post_url.startswith('http'):
-            post_url = urllib.parse.urljoin(self._LOGIN_URL, post_url)
+            post_url = compat_urlparse.urljoin(self._LOGIN_URL, post_url)
         response = self._download_webpage(
             post_url, None, 'Logging in',
@@ -103,7 +107,7 @@ class AluraIE(InfoExtractor):
             r'(?s)<p[^>]+class="alert-message[^"]*">(.+?)</p>',
             response, 'error message', default=None)
         if error:
-            raise ExtractorError(f'Unable to login: {error}', expected=True)
+            raise ExtractorError('Unable to login: %s' % error, expected=True)
         raise ExtractorError('Unable to log in')
@@ -119,7 +123,7 @@ class AluraCourseIE(AluraIE): # XXX: Do not subclass from concrete IE
     @classmethod
     def suitable(cls, url):
-        return False if AluraIE.suitable(url) else super().suitable(url)
+        return False if AluraIE.suitable(url) else super(AluraCourseIE, cls).suitable(url)

     def _real_extract(self, url):
@@ -157,7 +161,7 @@ class AluraCourseIE(AluraIE): # XXX: Do not subclass from concrete IE
                 'url': video_url,
                 'id_key': self.ie_key(),
                 'chapter': chapter,
-                'chapter_number': chapter_number,
+                'chapter_number': chapter_number
             }
             entries.append(entry)
         return self.playlist_result(entries, course_path, course_title)

@@ -1,77 +0,0 @@
from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    float_or_none,
    int_or_none,
    parse_iso8601,
    url_or_none,
)
from ..utils.traversal import traverse_obj


class AmadeusTVIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?amadeus\.tv/library/(?P<id>[\da-f]+)'
    _TESTS = [{
        'url': 'http://www.amadeus.tv/library/65091a87ff85af59d9fc54c3',
        'info_dict': {
            'id': '5576678021301411311',
            'ext': 'mp4',
            'title': 'Jieon Park - 第五届珠海莫扎特国际青少年音乐周小提琴C组第三轮',
            'thumbnail': 'http://1253584441.vod2.myqcloud.com/a0046a27vodtransbj1253584441/7db4af535576678021301411311/coverBySnapshot_10_0.jpg',
            'duration': 1264.8,
            'upload_date': '20230918',
            'timestamp': 1695034800,
            'display_id': '65091a87ff85af59d9fc54c3',
            'view_count': int,
            'description': 'md5:a0357b9c215489e2067cbae0b777bb95',
        },
    }]

    def _real_extract(self, url):
        display_id = self._match_id(url)
        webpage = self._download_webpage(url, display_id)
        nuxt_data = self._search_nuxt_data(webpage, display_id, traverse=('fetch', '0'))
        video_id = traverse_obj(nuxt_data, ('item', 'video', {str}))

        if not video_id:
            raise ExtractorError('Unable to extract actual video ID')

        video_data = self._download_json(
            f'http://playvideo.qcloud.com/getplayinfo/v2/1253584441/{video_id}',
            video_id, headers={'Referer': 'http://www.amadeus.tv/'})

        formats = []
        for video in traverse_obj(video_data, ('videoInfo', ('sourceVideo', ('transcodeList', ...)), {dict})):
            if not url_or_none(video.get('url')):
                continue
            formats.append({
                **traverse_obj(video, {
                    'url': 'url',
                    'format_id': ('definition', {lambda x: f'http-{x or "0"}'}),
                    'width': ('width', {int_or_none}),
                    'height': ('height', {int_or_none}),
                    'filesize': (('totalSize', 'size'), {int_or_none}),
                    'vcodec': ('videoStreamList', 0, 'codec'),
                    'acodec': ('audioStreamList', 0, 'codec'),
                    'fps': ('videoStreamList', 0, 'fps', {float_or_none}),
                }, get_all=False),
                'http_headers': {'Referer': 'http://www.amadeus.tv/'},
            })

        return {
            'id': video_id,
            'display_id': display_id,
            'formats': formats,
            **traverse_obj(video_data, {
                'title': ('videoInfo', 'basicInfo', 'name', {str}),
                'thumbnail': ('coverInfo', 'coverUrl', {url_or_none}),
                'duration': ('videoInfo', 'sourceVideo', ('floatDuration', 'duration'), {float_or_none}),
            }, get_all=False),
            **traverse_obj(nuxt_data, ('item', {
                'title': (('title', 'title_en', 'title_cn'), {str}),
                'description': (('description', 'description_en', 'description_cn'), {str}),
                'timestamp': ('date', {parse_iso8601}),
                'view_count': ('view', {int_or_none}),
            }), get_all=False),
        }

@@ -1,6 +1,6 @@
 from .common import InfoExtractor
-from .vimeo import VimeoIE
 from .youtube import YoutubeIE
+from .vimeo import VimeoIE
 from ..utils import (
     int_or_none,
     parse_iso8601,
@@ -25,7 +25,7 @@ class AmaraIE(InfoExtractor):
             'uploader': 'PBS NewsHour',
             'uploader_id': 'PBSNewsHour',
             'timestamp': 1549639570,
-        },
+        }
     }, {
         # Vimeo
         'url': 'https://amara.org/en/videos/kYkK1VUTWW5I/info/vimeo-at-ces-2011',
@@ -40,8 +40,8 @@ class AmaraIE(InfoExtractor):
             'timestamp': 1294763658,
             'upload_date': '20110111',
             'uploader': 'Sam Morrill',
-            'uploader_id': 'sammorrill',
-        },
+            'uploader_id': 'sammorrill'
+        }
     }, {
         # Direct Link
         'url': 'https://amara.org/en/videos/s8KL7I3jLmh6/info/the-danger-of-a-single-story/',
@@ -55,13 +55,13 @@ class AmaraIE(InfoExtractor):
             'subtitles': dict,
             'upload_date': '20091007',
             'timestamp': 1254942511,
-        },
+        }
     }]

     def _real_extract(self, url):
         video_id = self._match_id(url)
         meta = self._download_json(
-            f'https://amara.org/api/videos/{video_id}/',
+            'https://amara.org/api/videos/%s/' % video_id,
             video_id, query={'format': 'json'})
         title = meta['title']
         video_url = meta['all_urls'][0]

@@ -61,13 +61,13 @@ class AmazonStoreIE(InfoExtractor):
     }]

     def _real_extract(self, url):
-        playlist_id = self._match_id(url)
+        id = self._match_id(url)
         for retry in self.RetryManager():
-            webpage = self._download_webpage(url, playlist_id)
+            webpage = self._download_webpage(url, id)
             try:
                 data_json = self._search_json(
-                    r'var\s?obj\s?=\s?jQuery\.parseJSON\(\'', webpage, 'data', playlist_id,
+                    r'var\s?obj\s?=\s?jQuery\.parseJSON\(\'', webpage, 'data', id,
                     transform_source=js_to_json)
             except ExtractorError as e:
                 retry.error = e
@@ -81,7 +81,7 @@ class AmazonStoreIE(InfoExtractor):
             'height': int_or_none(video.get('videoHeight')),
             'width': int_or_none(video.get('videoWidth')),
         } for video in (data_json.get('videos') or []) if video.get('isVideo') and video.get('url')]
-        return self.playlist_result(entries, playlist_id=playlist_id, playlist_title=data_json.get('title'))
+        return self.playlist_result(entries, playlist_id=id, playlist_title=data_json.get('title'))


 class AmazonReviewsIE(InfoExtractor):
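The store extractor above digs its data out of an inline `jQuery.parseJSON('...')` call. A standalone sketch of that scraping step with invented sample HTML; the real code additionally runs `js_to_json` and retries the page on transient failures:

```python
import json
import re

webpage = "<script>var obj = jQuery.parseJSON('{\"title\": \"Store videos\", \"videos\": []}');</script>"
raw = re.search(r"jQuery\.parseJSON\('(.*?)'\)", webpage).group(1)
data = json.loads(raw)
print(data['title'])  # Store videos
```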

@@ -22,11 +22,8 @@ class AmazonMiniTVBaseIE(InfoExtractor):
         resp = self._download_json(
             f'https://www.amazon.in/minitv/api/web/{"graphql" if data else "prs"}',
-            asin, note=note, headers={
-                'Content-Type': 'application/json',
-                'currentpageurl': '/',
-                'currentplatform': 'dWeb',
-            }, data=json.dumps(data).encode() if data else None,
+            asin, note=note, headers={'Content-Type': 'application/json'},
+            data=json.dumps(data).encode() if data else None,
             query=None if data else {
                 'deviceType': 'A1WMMUXPCUJL4N',
                 'contentId': asin,
@@ -49,7 +46,7 @@ class AmazonMiniTVIE(AmazonMiniTVBaseIE):
             'ext': 'mp4',
             'title': 'May I Kiss You?',
             'language': 'Hindi',
-            'thumbnail': r're:^https?://.*\.(?:jpg|png)$',
+            'thumbnail': r're:^https?://.*\.jpg$',
             'description': 'md5:a549bfc747973e04feb707833474e59d',
             'release_timestamp': 1644710400,
             'release_date': '20220213',
@@ -71,7 +68,7 @@ class AmazonMiniTVIE(AmazonMiniTVBaseIE):
             'ext': 'mp4',
             'title': 'Jahaan',
             'language': 'Hindi',
-            'thumbnail': r're:^https?://.*\.(?:jpg|png)',
+            'thumbnail': r're:^https?://.*\.jpg',
             'description': 'md5:05eb765a77bf703f322f120ec6867339',
             'release_timestamp': 1647475200,
             'release_date': '20220317',

@@ -26,7 +26,6 @@ class AMCNetworksIE(ThePlatformIE): # XXX: Do not subclass from concrete IE
             # m3u8 download
             'skip_download': True,
         },
-        'skip': '404 Not Found',
     }, {
         'url': 'http://www.bbcamerica.com/shows/the-hunt/full-episodes/season-1/episode-01-the-hardest-challenge',
         'only_matching': True,
@@ -64,8 +63,8 @@ class AMCNetworksIE(ThePlatformIE): # XXX: Do not subclass from concrete IE
         site, display_id = self._match_valid_url(url).groups()
         requestor_id = self._REQUESTOR_ID_MAP[site]
         page_data = self._download_json(
-            f'https://content-delivery-gw.svc.ds.amcn.com/api/v2/content/amcn/{requestor_id.lower()}/url/{display_id}',
-            display_id)['data']
+            'https://content-delivery-gw.svc.ds.amcn.com/api/v2/content/amcn/%s/url/%s'
+            % (requestor_id.lower(), display_id), display_id)['data']
         properties = page_data.get('properties') or {}
         query = {
             'mbr': 'true',
@@ -76,15 +75,15 @@ class AMCNetworksIE(ThePlatformIE): # XXX: Do not subclass from concrete IE
         try:
             for v in page_data['children']:
                 if v.get('type') == 'video-player':
-                    release_pid = v['properties']['currentVideo']['meta']['releasePid']
-                    tp_path = 'M_UwQC/' + release_pid
+                    releasePid = v['properties']['currentVideo']['meta']['releasePid']
+                    tp_path = 'M_UwQC/' + releasePid
                     media_url = 'https://link.theplatform.com/s/' + tp_path
                     video_player_count += 1
         except KeyError:
             pass
         if video_player_count > 1:
             self.report_warning(
-                f'The JSON data has {video_player_count} video players. Only one will be extracted')
+                'The JSON data has %d video players. Only one will be extracted' % video_player_count)

         # Fall back to videoPid if releasePid not found.
         # TODO: Fall back to videoPid if releasePid manifest uses DRM.
@@ -131,7 +130,7 @@ class AMCNetworksIE(ThePlatformIE): # XXX: Do not subclass from concrete IE
         })
         ns_keys = theplatform_metadata.get('$xmlns', {}).keys()
         if ns_keys:
-            ns = next(iter(ns_keys))
+            ns = list(ns_keys)[0]
             episode = theplatform_metadata.get(ns + '$episodeTitle') or None
             episode_number = int_or_none(
                 theplatform_metadata.get(ns + '$episode'))

@@ -87,13 +87,13 @@ class AmericasTestKitchenIE(InfoExtractor):
             resource_type = 'episodes'

         resource = self._download_json(
-            f'https://www.americastestkitchen.com/api/v6/{resource_type}/{video_id}', video_id)
+            'https://www.americastestkitchen.com/api/v6/%s/%s' % (resource_type, video_id), video_id)
         video = resource['video'] if is_episode else resource
         episode = resource if is_episode else resource.get('episode') or {}

         return {
             '_type': 'url_transparent',
-            'url': 'https://player.zype.com/embed/{}.js?api_key=jZ9GUhRmxcPvX7M3SlfejB6Hle9jyHTdk2jVxG7wOHPLODgncEKVdPYBhuz9iWXQ'.format(video['zypeId']),
+            'url': 'https://player.zype.com/embed/%s.js?api_key=jZ9GUhRmxcPvX7M3SlfejB6Hle9jyHTdk2jVxG7wOHPLODgncEKVdPYBhuz9iWXQ' % video['zypeId'],
             'ie_key': 'Zype',
             'description': clean_html(video.get('description')),
             'timestamp': unified_timestamp(video.get('publishDate')),
@@ -174,22 +174,22 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
         ]

         if season_number:
-            playlist_id = f'season_{season_number}'
-            playlist_title = f'Season {season_number}'
+            playlist_id = 'season_%d' % season_number
+            playlist_title = 'Season %d' % season_number
             facet_filters.append('search_season_list:' + playlist_title)
         else:
             playlist_id = show
             playlist_title = title

         season_search = self._download_json(
-            f'https://y1fnzxui30-dsn.algolia.net/1/indexes/everest_search_{slug}_season_desc_production',
+            'https://y1fnzxui30-dsn.algolia.net/1/indexes/everest_search_%s_season_desc_production' % slug,
             playlist_id, headers={
                 'Origin': 'https://www.americastestkitchen.com',
                 'X-Algolia-API-Key': '8d504d0099ed27c1b73708d22871d805',
                 'X-Algolia-Application-Id': 'Y1FNZXUI30',
             }, query={
                 'facetFilters': json.dumps(facet_filters),
-                'attributesToRetrieve': f'description,search_{slug}_episode_number,search_document_date,search_url,title,search_atk_episode_season',
+                'attributesToRetrieve': 'description,search_%s_episode_number,search_document_date,search_url,title,search_atk_episode_season' % slug,
                 'attributesToHighlight': '',
                 'hitsPerPage': 1000,
             })
@@ -207,7 +207,7 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
             'description': episode.get('description'),
             'timestamp': unified_timestamp(episode.get('search_document_date')),
             'season_number': season_number,
-            'episode_number': int_or_none(episode.get(f'search_{slug}_episode_number')),
+            'episode_number': int_or_none(episode.get('search_%s_episode_number' % slug)),
             'ie_key': AmericasTestKitchenIE.ie_key(),
         }
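The season extractor above queries an Algolia search index directly. A hedged sketch of that request with `requests` standing in for `_download_json`; the index name, headers, and parameters are copied from the diff, but treat this as illustrative rather than a stable API:

```python
import json

import requests

def search_season(slug, season_number):
    return requests.get(
        f'https://y1fnzxui30-dsn.algolia.net/1/indexes/everest_search_{slug}_season_desc_production',
        headers={
            'Origin': 'https://www.americastestkitchen.com',
            'X-Algolia-API-Key': '8d504d0099ed27c1b73708d22871d805',
            'X-Algolia-Application-Id': 'Y1FNZXUI30',
        }, params={
            'facetFilters': json.dumps([f'search_season_list:Season {season_number}']),
            'hitsPerPage': 1000,
        }, timeout=10).json()

# hits = search_season('atk', 21)['hits']
```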

@@ -1,7 +1,7 @@
 from .common import InfoExtractor
 from ..utils import (
-    ExtractorError,
     determine_ext,
+    ExtractorError,
     int_or_none,
     mimetype2ext,
     parse_iso8601,
@@ -19,12 +19,12 @@ class AMPIE(InfoExtractor): # XXX: Conventionally, base classes should end with
             'Unable to download Akamai AMP feed', transform_source=strip_jsonp)
         item = feed.get('channel', {}).get('item')
         if not item:
-            raise ExtractorError('{} said: {}'.format(self.IE_NAME, feed['error']))
+            raise ExtractorError('%s said: %s' % (self.IE_NAME, feed['error']))

         video_id = item['guid']

         def get_media_node(name, default=None):
-            media_name = f'media-{name}'
+            media_name = 'media-%s' % name
             media_group = item.get('media-group') or item
             return media_group.get(media_name) or item.get(media_name) or item.get(name, default)

@@ -5,7 +5,7 @@ from ..utils import (
     int_or_none,
     str_or_none,
     traverse_obj,
-    unified_timestamp,
+    unified_timestamp
 )
@@ -29,7 +29,7 @@ class AnchorFMEpisodeIE(InfoExtractor):
             'release_date': '20230121',
             'release_timestamp': 1674285179,
             'episode_id': 'e1tpt3d',
-        },
+        }
     }, {
         # embed url
         'url': 'https://anchor.fm/apakatatempo/embed/episodes/S2E75-Perang-Bintang-di-Balik-Kasus-Ferdy-Sambo-dan-Ismail-Bolong-e1shjqd',
@@ -50,7 +50,7 @@ class AnchorFMEpisodeIE(InfoExtractor):
             'season': 'Season 2',
             'season_number': 2,
             'episode_id': 'e1shjqd',
-        },
+        }
     }]

     _WEBPAGE_TESTS = [{
@@ -72,7 +72,7 @@ class AnchorFMEpisodeIE(InfoExtractor):
             'thumbnail': 'https://s3-us-west-2.amazonaws.com/anchor-generated-image-bank/production/podcast_uploaded_episode400/2627805/2627805-1671590688729-4db3882ac9e4b.jpg',
             'uploader': 'Podcast Tempo',
             'channel': 'apakatatempo',
-        },
+        }
     }]

     def _real_extract(self, url):

@@ -1,7 +1,7 @@
 import re

 from .common import InfoExtractor
-from ..utils import merge_dicts, url_or_none
+from ..utils import url_or_none, merge_dicts


 class AngelIE(InfoExtractor):
@@ -15,8 +15,8 @@ class AngelIE(InfoExtractor):
             'title': 'Tuttle Twins Season 1, Episode 1: When Laws Give You Lemons',
             'description': 'md5:73b704897c20ab59c433a9c0a8202d5e',
             'thumbnail': r're:^https?://images.angelstudios.com/image/upload/angel-app/.*$',
-            'duration': 1359.0,
-        },
+            'duration': 1359.0
+        }
     }, {
         'url': 'https://www.angel.com/watch/the-chosen/episode/8dfb714d-bca5-4812-8125-24fb9514cd10/season-1/episode-1/i-have-called-you-by-name',
         'md5': 'e4774bad0a5f0ad2e90d175cafdb797d',
@@ -26,8 +26,8 @@ class AngelIE(InfoExtractor):
             'title': 'The Chosen Season 1, Episode 1: I Have Called You By Name',
             'description': 'md5:aadfb4827a94415de5ff6426e6dee3be',
             'thumbnail': r're:^https?://images.angelstudios.com/image/upload/angel-app/.*$',
-            'duration': 3276.0,
-        },
+            'duration': 3276.0
+        }
     }]

     def _real_extract(self, url):
@@ -44,7 +44,7 @@ class AngelIE(InfoExtractor):
             'title': self._og_search_title(webpage),
             'description': self._og_search_description(webpage),
             'formats': formats,
-            'subtitles': subtitles,
+            'subtitles': subtitles
         }

         # Angel uses cloudinary in the background and supports image transformations.

@@ -5,26 +5,22 @@ from ..networking import HEADRequest
 from ..utils import (
     ExtractorError,
     determine_ext,
-    make_archive_id,
     scale_thumbnails_to_max_format_width,
 )


-class AntennaBaseIE(InfoExtractor):
+class Ant1NewsGrBaseIE(InfoExtractor):
     def _download_and_extract_api_data(self, video_id, netloc, cid=None):
-        info = self._download_json(f'{self.http_scheme()}//{netloc}{self._API_PATH}',
-                                   video_id, query={'cid': cid or video_id})
-        if not info.get('url'):
-            raise ExtractorError(f'No source found for {video_id}')
-
-        ext = determine_ext(info['url'])
-        if ext == 'm3u8':
-            formats, subs = self._extract_m3u8_formats_and_subtitles(info['url'], video_id, 'mp4')
-        else:
-            formats, subs = [{'url': info['url'], 'format_id': ext}], {}
-
+        url = f'{self.http_scheme()}//{netloc}{self._API_PATH}'
+        info = self._download_json(url, video_id, query={'cid': cid or video_id})
+        try:
+            source = info['url']
+        except KeyError:
+            raise ExtractorError('no source found for %s' % video_id)
+        formats, subs = (self._extract_m3u8_formats_and_subtitles(source, video_id, 'mp4')
+                         if determine_ext(source) == 'm3u8' else ([{'url': source}], {}))
         thumbnails = scale_thumbnails_to_max_format_width(
-            formats, [{'url': info['thumb']}], r'(?<=/imgHandler/)\d+') if info.get('thumb') else []
+            formats, [{'url': info['thumb']}], r'(?<=/imgHandler/)\d+')
         return {
             'id': video_id,
             'title': info.get('title'),
@@ -34,31 +30,21 @@ class AntennaBaseIE(InfoExtractor):
} }
class AntennaGrWatchIE(AntennaBaseIE): class Ant1NewsGrWatchIE(Ant1NewsGrBaseIE):
IE_NAME = 'antenna:watch' IE_NAME = 'ant1newsgr:watch'
IE_DESC = 'antenna.gr and ant1news.gr videos' IE_DESC = 'ant1news.gr videos'
_VALID_URL = r'https?://(?P<netloc>(?:www\.)?(?:antenna|ant1news)\.gr)/watch/(?P<id>\d+)/' _VALID_URL = r'https?://(?P<netloc>(?:www\.)?ant1news\.gr)/watch/(?P<id>\d+)/'
_API_PATH = '/templates/data/player' _API_PATH = '/templates/data/player'
_TESTS = [{ _TESTS = [{
'url': 'https://www.ant1news.gr/watch/1506168/ant1-news-09112021-stis-18-45', 'url': 'https://www.ant1news.gr/watch/1506168/ant1-news-09112021-stis-18-45',
'md5': 'c472d9dd7cd233c63aff2ea42201cda6', 'md5': '95925e6b32106754235f2417e0d2dfab',
'info_dict': { 'info_dict': {
'id': '1506168', 'id': '1506168',
'ext': 'mp4', 'ext': 'mp4',
'title': 'md5:0ad00fa66ecf8aa233d26ab0dba7514a', 'title': 'md5:0ad00fa66ecf8aa233d26ab0dba7514a',
'description': 'md5:18665af715a6dcfeac1d6153a44f16b0', 'description': 'md5:18665af715a6dcfeac1d6153a44f16b0',
'thumbnail': r're:https://ant1media\.azureedge\.net/imgHandler/\d+/26d46bf6-8158-4f02-b197-7096c714b2de\.jpg', 'thumbnail': 'https://ant1media.azureedge.net/imgHandler/640/26d46bf6-8158-4f02-b197-7096c714b2de.jpg',
},
}, {
'url': 'https://www.antenna.gr/watch/1643812/oi-prodotes-epeisodio-01',
'md5': '8f6f7dd3b1dba4d835ba990e25f31243',
'info_dict': {
'id': '1643812',
'ext': 'mp4',
'format_id': 'mp4',
'title': 'ΟΙ ΠΡΟΔΟΤΕΣ ΕΠΕΙΣΟΔΙΟ 01',
'thumbnail': r're:https://ant1media\.azureedge\.net/imgHandler/\d+/b3d63096-e72d-43c4-87a0-00d4363d242f\.jpg',
}, },
}] }]
@@ -66,26 +52,25 @@ class AntennaGrWatchIE(AntennaBaseIE):
video_id, netloc = self._match_valid_url(url).group('id', 'netloc') video_id, netloc = self._match_valid_url(url).group('id', 'netloc')
webpage = self._download_webpage(url, video_id) webpage = self._download_webpage(url, video_id)
info = self._download_and_extract_api_data(video_id, netloc) info = self._download_and_extract_api_data(video_id, netloc)
info['description'] = self._og_search_description(webpage, default=None) info['description'] = self._og_search_description(webpage)
info['_old_archive_ids'] = [make_archive_id('Ant1NewsGrWatch', video_id)]
return info return info
class Ant1NewsGrArticleIE(AntennaBaseIE): class Ant1NewsGrArticleIE(Ant1NewsGrBaseIE):
IE_NAME = 'ant1newsgr:article' IE_NAME = 'ant1newsgr:article'
IE_DESC = 'ant1news.gr articles' IE_DESC = 'ant1news.gr articles'
_VALID_URL = r'https?://(?:www\.)?ant1news\.gr/[^/]+/article/(?P<id>\d+)/' _VALID_URL = r'https?://(?:www\.)?ant1news\.gr/[^/]+/article/(?P<id>\d+)/'
_TESTS = [{ _TESTS = [{
'url': 'https://www.ant1news.gr/afieromata/article/549468/o-tzeims-mpont-sta-meteora-oi-apeiles-kai-o-xesikomos-ton-kalogeron', 'url': 'https://www.ant1news.gr/afieromata/article/549468/o-tzeims-mpont-sta-meteora-oi-apeiles-kai-o-xesikomos-ton-kalogeron',
'md5': '57eb8d12181f0fa2b14b0b138e1de9b6', 'md5': '294f18331bb516539d72d85a82887dcc',
'info_dict': { 'info_dict': {
'id': '_xvg/m_cmbatw=', 'id': '_xvg/m_cmbatw=',
'ext': 'mp4', 'ext': 'mp4',
'title': 'md5:a93e8ecf2e4073bfdffcb38f59945411', 'title': 'md5:a93e8ecf2e4073bfdffcb38f59945411',
'timestamp': 1666166520, 'timestamp': 1603092840,
'upload_date': '20221019', 'upload_date': '20201019',
'thumbnail': 'https://ant1media.azureedge.net/imgHandler/1920/756206d2-d640-40e2-b201-3555abdfc0db.jpg', 'thumbnail': 'https://ant1media.azureedge.net/imgHandler/640/756206d2-d640-40e2-b201-3555abdfc0db.jpg',
}, },
}, { }, {
'url': 'https://ant1news.gr/Society/article/620286/symmoria-anilikon-dikigoros-thymaton-ithelan-na-toys-apoteleiosoyn', 'url': 'https://ant1news.gr/Society/article/620286/symmoria-anilikon-dikigoros-thymaton-ithelan-na-toys-apoteleiosoyn',
@@ -105,19 +90,19 @@ class Ant1NewsGrArticleIE(AntennaBaseIE):
info = self._search_json_ld(webpage, video_id, expected_type='NewsArticle') info = self._search_json_ld(webpage, video_id, expected_type='NewsArticle')
embed_urls = list(Ant1NewsGrEmbedIE._extract_embed_urls(url, webpage)) embed_urls = list(Ant1NewsGrEmbedIE._extract_embed_urls(url, webpage))
if not embed_urls: if not embed_urls:
raise ExtractorError(f'no videos found for {video_id}', expected=True) raise ExtractorError('no videos found for %s' % video_id, expected=True)
return self.playlist_from_matches( return self.playlist_from_matches(
embed_urls, video_id, info.get('title'), ie=Ant1NewsGrEmbedIE.ie_key(), embed_urls, video_id, info.get('title'), ie=Ant1NewsGrEmbedIE.ie_key(),
video_kwargs={'url_transparent': True, 'timestamp': info.get('timestamp')}) video_kwargs={'url_transparent': True, 'timestamp': info.get('timestamp')})
class Ant1NewsGrEmbedIE(AntennaBaseIE): class Ant1NewsGrEmbedIE(Ant1NewsGrBaseIE):
IE_NAME = 'ant1newsgr:embed' IE_NAME = 'ant1newsgr:embed'
IE_DESC = 'ant1news.gr embedded videos' IE_DESC = 'ant1news.gr embedded videos'
_BASE_PLAYER_URL_RE = r'(?:https?:)?//(?:[a-zA-Z0-9\-]+\.)?(?:antenna|ant1news)\.gr/templates/pages/player' _BASE_PLAYER_URL_RE = r'(?:https?:)?//(?:[a-zA-Z0-9\-]+\.)?(?:antenna|ant1news)\.gr/templates/pages/player'
_VALID_URL = rf'{_BASE_PLAYER_URL_RE}\?([^#]+&)?cid=(?P<id>[^#&]+)' _VALID_URL = rf'{_BASE_PLAYER_URL_RE}\?([^#]+&)?cid=(?P<id>[^#&]+)'
_EMBED_REGEX = [rf'<iframe[^>]+?src=(?P<_q1>["\'])(?P<url>{_BASE_PLAYER_URL_RE}\?(?:(?!(?P=_q1)).)+)(?P=_q1)'] _EMBED_REGEX = [rf'<iframe[^>]+?src=(?P<_q1>["\'])(?P<url>{_BASE_PLAYER_URL_RE}\?(?:(?!(?P=_q1)).)+)(?P=_q1)']
_API_PATH = '/templates/data/jsonPlayer' _API_PATH = '/news/templates/data/jsonPlayer'
_TESTS = [{ _TESTS = [{
'url': 'https://www.antenna.gr/templates/pages/player?cid=3f_li_c_az_jw_y_u=&w=670&h=377', 'url': 'https://www.antenna.gr/templates/pages/player?cid=3f_li_c_az_jw_y_u=&w=670&h=377',

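Both sides of the base-class hunk above delegate thumbnail sizing to `scale_thumbnails_to_max_format_width` from `yt_dlp.utils`, which rewrites the width component matched by the regex (here the number after `/imgHandler/`) to the widest available video format. A usage sketch with made-up format and thumbnail data, assuming yt-dlp is installed:

```python
from yt_dlp.utils import scale_thumbnails_to_max_format_width

# Hypothetical inputs: one 1280px-wide video format and a 640px thumbnail URL.
formats = [{'url': 'https://example.com/stream.m3u8', 'width': 1280, 'height': 720}]
thumbnails = [{'url': 'https://ant1media.azureedge.net/imgHandler/640/sample.jpg'}]

print(scale_thumbnails_to_max_format_width(formats, thumbnails, r'(?<=/imgHandler/)\d+'))
# Expected (roughly): the '640' in the URL replaced by '1280', with the
# format's width/height merged into the thumbnail dict.
```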
View File

@@ -8,8 +8,10 @@ import time
 from .common import InfoExtractor
 from ..aes import aes_encrypt
 from ..utils import (
+    bytes_to_intlist,
     determine_ext,
     int_or_none,
+    intlist_to_bytes,
     join_nonempty,
     smuggle_url,
     strip_jsonp,
@@ -31,6 +33,24 @@ class AnvatoIE(InfoExtractor):
     _AUTH_KEY = b'\x31\xc2\x42\x84\x9e\x73\xa0\xce'  # from anvplayer.min.js

     _TESTS = [{
+        # from https://www.nfl.com/videos/baker-mayfield-s-game-changing-plays-from-3-td-game-week-14
+        'url': 'anvato:GXvEgwyJeWem8KCYXfeoHWknwP48Mboj:899441',
+        'md5': '921919dab3cd0b849ff3d624831ae3e2',
+        'info_dict': {
+            'id': '899441',
+            'ext': 'mp4',
+            'title': 'Baker Mayfield\'s game-changing plays from 3-TD game Week 14',
+            'description': 'md5:85e05a3cc163f8c344340f220521136d',
+            'upload_date': '20201215',
+            'timestamp': 1608009755,
+            'thumbnail': r're:^https?://.*\.jpg',
+            'uploader': 'NFL',
+            'tags': ['Baltimore Ravens at Cleveland Browns (2020-REG-14)', 'Baker Mayfield', 'Game Highlights',
+                     'Player Highlights', 'Cleveland Browns', 'league'],
+            'duration': 157,
+            'categories': ['Entertainment', 'Game', 'Highlights'],
+        },
+    }, {
         # from https://ktla.com/news/99-year-old-woman-learns-to-fly-in-torrance-checks-off-bucket-list-dream/
         'url': 'anvato:X8POa4zpGZMmeiq0wqiO8IP5rMqQM9VN:8032455',
         'md5': '837718bcfb3a7778d022f857f7a9b19e',
@@ -218,7 +238,32 @@ class AnvatoIE(InfoExtractor):
         'gray': 'anvato_mcp_gray_web_prod_4c10f067c393ed8fc453d3930f8ab2b159973900',
         'hearst': 'anvato_mcp_hearst_web_prod_5356c3de0fc7c90a3727b4863ca7fec3a4524a99',
         'cbs': 'anvato_mcp_cbs_web_prod_02f26581ff80e5bda7aad28226a8d369037f2cbe',
-        'telemundo': 'anvato_mcp_telemundo_web_prod_c5278d51ad46fda4b6ca3d0ea44a7846a054f582',
+        'telemundo': 'anvato_mcp_telemundo_web_prod_c5278d51ad46fda4b6ca3d0ea44a7846a054f582'
+    }
+
+    def _generate_nfl_token(self, anvack, mcp_id):
+        reroute = self._download_json(
+            'https://api.nfl.com/v1/reroute', mcp_id, data=b'grant_type=client_credentials',
+            headers={'X-Domain-Id': 100}, note='Fetching token info')
+        token_type = reroute.get('token_type') or 'Bearer'
+        auth_token = f'{token_type} {reroute["access_token"]}'
+        response = self._download_json(
+            'https://api.nfl.com/v3/shield/', mcp_id, data=json.dumps({
+                'query': '''{
+    viewer {
+        mediaToken(anvack: "%s", id: %s) {
+            token
+        }
+    }
+}''' % (anvack, mcp_id),
+            }).encode(), headers={
+                'Authorization': auth_token,
+                'Content-Type': 'application/json',
+            }, note='Fetching NFL API token')
+        return traverse_obj(response, ('data', 'viewer', 'mediaToken', 'token'))
+
+    _TOKEN_GENERATORS = {
+        'GXvEgwyJeWem8KCYXfeoHWknwP48Mboj': _generate_nfl_token,
     }

     def _server_time(self, access_key, video_id):
@@ -232,8 +277,8 @@ class AnvatoIE(InfoExtractor):
         server_time = self._server_time(access_key, video_id)
         input_data = f'{server_time}~{md5_text(video_data_url)}~{md5_text(server_time)}'

-        auth_secret = bytes(aes_encrypt(
-            list(input_data[:64].encode()), list(self._AUTH_KEY)))
+        auth_secret = intlist_to_bytes(aes_encrypt(
+            bytes_to_intlist(input_data[:64]), bytes_to_intlist(self._AUTH_KEY)))
         query = {
             'X-Anvato-Adst-Auth': base64.b64encode(auth_secret).decode('ascii'),
             'rtyp': 'fp',
@@ -245,6 +290,8 @@ class AnvatoIE(InfoExtractor):
         }
         if extracted_token is not None:
             api['anvstk2'] = extracted_token
+        elif self._TOKEN_GENERATORS.get(access_key) is not None:
+            api['anvstk2'] = self._TOKEN_GENERATORS[access_key](self, access_key, video_id)
         elif self._ANVACK_TABLE.get(access_key) is not None:
             api['anvstk'] = md5_text(f'{access_key}|{anvrid}|{server_time}|{self._ANVACK_TABLE[access_key]}')
         else:
@@ -252,7 +299,7 @@ class AnvatoIE(InfoExtractor):
         return self._download_json(
             video_data_url, video_id, transform_source=strip_jsonp, query=query,
-            data=json.dumps({'api': api}, separators=(',', ':')).encode())
+            data=json.dumps({'api': api}, separators=(',', ':')).encode('utf-8'))

     def _get_anvato_videos(self, access_key, video_id, token):
         video_data = self._get_video_json(access_key, video_id, token)
@@ -311,7 +358,7 @@ class AnvatoIE(InfoExtractor):
         for caption in video_data.get('captions', []):
             a_caption = {
                 'url': caption['url'],
-                'ext': 'tt' if caption.get('format') == 'SMPTE-TT' else None,
+                'ext': 'tt' if caption.get('format') == 'SMPTE-TT' else None
             }
             subtitles.setdefault(caption['language'], []).append(a_caption)
         subtitles = self._merge_subtitles(subtitles, hls_subs, vtt_subs)

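The `_TOKEN_GENERATORS` hunk above stores a plain function in a class-level dict. A dict lookup does not go through the descriptor protocol, so the result is never bound to the instance and `self` has to be passed explicitly at the call site, exactly as `self._TOKEN_GENERATORS[access_key](self, access_key, video_id)` does. A minimal self-contained sketch of that dispatch pattern, with all names hypothetical:

```python
class Extractor:
    def _generate_token(self, access_key, video_id):
        # Stand-in for a network call that exchanges credentials for a token.
        return f'token-for-{access_key}-{video_id}'

    # The dict value is the plain function object, not a bound method.
    _TOKEN_GENERATORS = {
        'example-access-key': _generate_token,
    }

    def get_token(self, access_key, video_id):
        generator = self._TOKEN_GENERATORS.get(access_key)
        if generator is not None:
            return generator(self, access_key, video_id)  # pass self explicitly
        return None


print(Extractor().get_token('example-access-key', '123'))
# token-for-example-access-key-123
```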
View File

@@ -10,7 +10,6 @@ from ..utils import (


 class AolIE(YahooIE):  # XXX: Do not subclass from concrete IE
-    _WORKING = False
     IE_NAME = 'aol.com'
     _VALID_URL = r'(?:aol-video:|https?://(?:www\.)?aol\.(?:com|ca|co\.uk|de|jp)/video/(?:[^/]+/)*)(?P<id>\d{9}|[0-9a-f]{24}|[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12})'
@@ -30,7 +29,7 @@ class AolIE(YahooIE):  # XXX: Do not subclass from concrete IE
         'params': {
             # m3u8 download
             'skip_download': True,
-        },
+        }
     }, {
         # video with vidible ID
         'url': 'https://www.aol.com/video/view/netflix-is-raising-rates/5707d6b8e4b090497b04f706/',
@@ -46,7 +45,7 @@ class AolIE(YahooIE):  # XXX: Do not subclass from concrete IE
         'params': {
             # m3u8 download
             'skip_download': True,
-        },
+        }
     }, {
         'url': 'https://www.aol.com/video/view/park-bench-season-2-trailer/559a1b9be4b0c3bfad3357a7/',
         'only_matching': True,
@@ -83,10 +82,10 @@ class AolIE(YahooIE):  # XXX: Do not subclass from concrete IE
             return self._extract_yahoo_video(video_id, 'us')

         response = self._download_json(
-            f'https://feedapi.b2c.on.aol.com/v1.0/app/videos/aolon/{video_id}/details',
+            'https://feedapi.b2c.on.aol.com/v1.0/app/videos/aolon/%s/details' % video_id,
             video_id)['response']
         if response['statusText'] != 'Ok':
-            raise ExtractorError('{} said: {}'.format(self.IE_NAME, response['statusText']), expected=True)
+            raise ExtractorError('%s said: %s' % (self.IE_NAME, response['statusText']), expected=True)
         video_data = response['data']

         formats = []

Some files were not shown because too many files have changed in this diff.