forked from Github/Axter-Stash
Removing Dev version
This commit is contained in:
@@ -1,108 +0,0 @@
# FileMonitor: Ver 0.7.2 (By David Maisonave)

FileMonitor is a [Stash](https://github.com/stashapp/stash) plugin with the following two main features:

- Updates Stash when any file change occurs in the Stash library.
- Runs scheduled tasks based on the scheduler configuration in filemonitor_config.py.

## Starting FileMonitor from the UI

From the UI, FileMonitor can be started as a service or as a plugin. The recommended method is to start it as a service. When started as a service, it appears on the Task Queue momentarily, and then disappears as it starts running in the background.

- To start monitoring file changes, go to **Stash->Settings->Task->[Plugin Tasks]->FileMonitor**, and click the [Start Library Monitor Service] button.
  - 
  - **Important note**: This initially shows up as a plugin in the Task Queue momentarily. It then disappears from the Task Queue and runs in the background as a service.
- To stop FileMonitor, click the [Stop Library Monitor] button.
- The **[Run as a Plugin]** option is mainly available for backwards compatibility and for testing purposes.

## Using FileMonitor as a script

**FileMonitor** can also be run as a standalone script.

- To start monitoring, call the script and pass --url with the Stash URL.
  - `python filemonitor.py --url http://localhost:9999`
- To stop **FileMonitor**, pass the **--stop** argument.
  - `python filemonitor.py --stop`
  - The stop command stops both the standalone job and the Stash plugin task job.
- To restart **FileMonitor**, pass the **--restart** argument.
  - `python filemonitor.py --restart`
  - The restart command restarts FileMonitor as a Task in Stash.

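Under the hood (per the plugin source included in this commit), the running monitor and the `--stop` command coordinate through a small shared-memory flag using the `CONTINUE_RUNNING_SIG` (99) and `STOP_RUNNING_SIG` (32) values. A minimal sketch of that mechanism; the memory-map name below is illustrative, not the plugin's real one:

```python
from multiprocessing import shared_memory

CONTINUE_RUNNING_SIG = 99  # value the monitor writes while it should keep running
STOP_RUNNING_SIG = 32      # value a second process writes to request shutdown
SHM_NAME = "filemonitor_demo_flag"  # illustrative name for this sketch

# "Monitor" side: create the small flag buffer and mark it as running.
shm = shared_memory.SharedMemory(name=SHM_NAME, create=True, size=4)
shm.buf[0] = CONTINUE_RUNNING_SIG

# "--stop" side: attach to the existing map by name and flip the flag.
stopper = shared_memory.SharedMemory(name=SHM_NAME)
stopper.buf[0] = STOP_RUNNING_SIG
stopper.close()

# The monitor sees the change on its next poll and shuts down.
should_stop = shm.buf[0] == STOP_RUNNING_SIG
shm.close()
shm.unlink()
print(should_stop)  # → True
```

Creating the map with `create=True` also acts as singleton logic: a second monitor instance fails to create it and exits, which is why only one FileMonitor can run at a time.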
# Reoccurring Task Scheduler

To enable the scheduler, go to **Stash->Settings->Plugins->Plugins->FileMonitor** and enable the **Scheduler** option.



To configure the schedule or to add new tasks, edit the **task_reoccurring_scheduler** section in the **filemonitor_config.py** file.

```` python
"task_reoccurring_scheduler": [
    {"task" : "Clean",             "hours" : 48},  # Maintenance -> [Clean] (every 2 days)
    {"task" : "Auto Tag",          "hours" : 24},  # Auto Tag -> [Auto Tag] (daily)
    {"task" : "Optimise Database", "hours" : 24},  # Maintenance -> [Optimise Database] (daily)

    # The following syntax is used for plugins. A plugin task requires the plugin name for the [task] field, and the plugin ID for the [pluginId] field.
    {"task" : "Create Tags", "pluginId" : "pathParser", "hours" : 0},  # This task requires plugin [Path Parser]. To enable this task, change the zero to a positive number.

    # Note: For a weekly task, use the weekday method, which is more reliable. The hour portion of the time MUST be two digits, in 24-hour (military) format. Example: 1PM = "13:00"
    {"task" : "Generate", "weekday" : "sunday", "time" : "07:00"},  # Generated Content -> [Generate] (every Sunday at 7AM)
    {"task" : "Scan",     "weekday" : "sunday", "time" : "03:00"},  # Library -> [Scan] (weekly) (every Sunday at 3AM)

    # To perform a task monthly, specify the day of the month as in the weekly schedule format, and add a monthly field.
    # The monthly field value must be 1, 2, 3, or 4.
    # 1 = 1st specified weekday of the month. Example: 1st Monday.
    # 2 = 2nd specified weekday of the month. Example: 2nd Monday of the month.
    # 3 = 3rd specified weekday of the month.
    # 4 = 4th specified weekday of the month.
    # Example monthly method:
    {"task" : "Backup", "weekday" : "saturday", "time" : "01:00", "monthly" : 2},  # Backup -> [Backup] (2nd Saturday of the month at 1AM)

    # The following is a placeholder for a plugin.
    {"task" : "PluginButtonName_Here", "pluginId" : "PluginId_Here", "hours" : 0},  # The zero frequency value disables this task.
    # Add additional plugin tasks here.
],
````
- To add plugins to the task list, both the plugin ID and the plugin name are required. The plugin ID is usually the file name of the script without the extension.
- Tasks can be scheduled to run monthly, weekly, hourly, or by minutes.
- The scheduler list uses two types of syntax: one is **frequency** based, and the other is **weekday** based.
  - **Frequency Based**
    - The frequency field can be in **minutes** or **hours**.
    - The frequency value must be a number greater than zero. A frequency value of zero disables the task on the schedule.
    - **Frequency Based Examples**:
      - Starts a task every 24 hours:
        - `{"task" : "Auto Tag", "hours" : 24},`
      - Starts a (**plugin**) task every 30 minutes:
        - `{"task" : "Create Tags", "pluginId" : "pathParser", "minutes" : 30},`
  - **Weekday Based**
    - Use the weekday-based syntax for weekly and monthly schedules.
    - Both weekly and monthly schedules must have a **weekday** field and a **time** field, which specify the day of the week and the time to start the task.
    - **Weekly**:
      - **Weekly Example**:
        - Starts a task weekly every Monday at 9AM:
          - `{"task" : "Generate", "weekday" : "monday", "time" : "09:00"},`
    - **Monthly**:
      - The monthly syntax is similar to the weekly format, but it also includes a **"monthly"** field, which must be set to 1, 2, 3, or 4.
      - **Monthly Examples**:
        - Starts a task once a month on the 3rd Sunday of the month at 1AM:
          - `{"task" : "Backup", "weekday" : "sunday", "time" : "01:00", "monthly" : 3},`
        - Starts a task at 2PM once a month on the 1st Saturday of the month:
          - `{"task" : "Optimise Database", "weekday" : "saturday", "time" : "14:00", "monthly" : 1},`

- The scheduler feature requires `pip install schedule`.
  - If the scheduler is left disabled, **schedule** does NOT have to be installed.
- For best results, use the scheduler with FileMonitor running as a service.

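The **monthly** field's behavior can be made concrete: per the plugin source, a task with `"monthly": N` only runs when the day of the month falls in the N-th seven-day window. A small sketch of that check (the helper name is illustrative):

```python
def monthly_window(monthly):
    """Return the (first, last) day-of-month range for the N-th weekday occurrence,
    matching the plugin's check: ((N - 1) * 7) + 1 through N * 7."""
    first = ((monthly - 1) * 7) + 1
    last = monthly * 7
    return first, last

# "monthly": 2 (e.g. the 2nd Saturday) only fires when the day of month is 8..14.
print(monthly_window(1))  # → (1, 7)
print(monthly_window(2))  # → (8, 14)
print(monthly_window(4))  # → (22, 28)
```

So a weekday task with `"monthly": 2` is skipped on any run where today's day-of-month is outside 8–14, even if the weekday matches.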
## Requirements

- `pip install -r requirements.txt`
- Or manually install each requirement:
  - `pip install stashapp-tools --upgrade`
  - `pip install pyYAML`
  - `pip install watchdog`
  - `pip install schedule`

## Installation

- Follow the **Requirements** instructions.
- In the Stash plugin directory (C:\Users\MyUserName\.stash\plugins), create a folder named **FileMonitor**.
- Copy all the plugin files to this folder (**C:\Users\MyUserName\.stash\plugins\FileMonitor**).
- Click the **[Reload Plugins]** button in Stash->Settings->Plugins->Plugins.

That's it!!!

## Options

- Main options are accessible in the UI via Settings->Plugins->Plugins->[FileMonitor].
- Additional options are available in filemonitor_config.py.

@@ -1,280 +0,0 @@
import stashapi.log as stashLog # stashapi.log is used by default for error and critical logging
from stashapi.stashapp import StashInterface
from logging.handlers import RotatingFileHandler
import inspect
import sys
import os
import pathlib
import logging
import json
import __main__

# StashPluginHelper (By David Maisonave aka Axter)
# See end of this file for example usage
# Log Features:
#     Can optionally log out to multiple outputs for each Log or Trace call.
#     Logging includes the source code line number.
#     Sets a maximum plugin log file size.
# Stash Interface Features:
#     Sets STASH_INTERFACE with StashInterface.
#     Gets STASH_URL value from command line argument and/or from STDIN_READ.
#     Sets FRAGMENT_SERVER based on command line arguments or STDIN_READ.
#     Sets PLUGIN_ID based on the main script file name (in lower case).
#     Gets PLUGIN_TASK_NAME value.
#     Sets pluginSettings (the plugin UI settings).
# Misc Features:
#     Gets DRY_RUN value from command line argument and/or from UI and/or from config file.
#     Gets DEBUG_TRACING value from command line argument and/or from UI and/or from config file.
#     Sets RUNNING_IN_COMMAND_LINE_MODE to True if it detects multiple arguments.
#     Sets CALLED_AS_STASH_PLUGIN to True if it's able to read from STDIN_READ.
class StashPluginHelper:
    # Primary members for external reference
    PLUGIN_TASK_NAME = None
    PLUGIN_ID = None
    PLUGIN_CONFIGURATION = None
    pluginSettings = None
    pluginConfig = None
    STASH_INTERFACE = None
    STASH_URL = None
    STASH_CONFIGURATION = None
    JSON_INPUT = None
    DEBUG_TRACING = False
    DRY_RUN = False
    CALLED_AS_STASH_PLUGIN = False
    RUNNING_IN_COMMAND_LINE_MODE = False

    # printTo argument (bit flags; combine by addition or bitwise OR)
    LOG_TO_FILE = 1
    LOG_TO_CONSOLE = 2  # Note: Output is only visible when running in command line mode. In plugin mode, this output is lost.
    LOG_TO_STDERR = 4   # Note: In plugin mode, output to StdErr ALWAYS gets sent to Stash logging as an error.
    LOG_TO_STASH = 8
    LOG_TO_WARN = 16
    LOG_TO_ERROR = 32
    LOG_TO_CRITICAL = 64
    LOG_TO_ALL = LOG_TO_FILE + LOG_TO_CONSOLE + LOG_TO_STDERR + LOG_TO_STASH

    # Misc class variables
    MAIN_SCRIPT_NAME = None
    LOG_LEVEL = logging.INFO
    LOG_FILE_DIR = None
    LOG_FILE_NAME = None
    STDIN_READ = None
    FRAGMENT_SERVER = None
    logger = None
    traceOncePreviousHits = []

    # Prefix message values
    LEV_TRACE = "TRACE: "
    LEV_DBG = "DBG: "
    LEV_INF = "INF: "
    LEV_WRN = "WRN: "
    LEV_ERR = "ERR: "
    LEV_CRITICAL = "CRITICAL: "

    # Default format
    LOG_FORMAT = "[%(asctime)s] %(message)s"

    # Externally modifiable variables
    log_to_err_set = LOG_TO_FILE + LOG_TO_STDERR  # Can be changed by the calling source to customize which targets get error messages
    log_to_norm = LOG_TO_FILE + LOG_TO_CONSOLE    # Can be changed to set the target output for normal logging
    log_to_wrn_set = LOG_TO_FILE + LOG_TO_STASH   # Can be changed by the calling source to customize which targets get warning messages

    def __init__(self,
                 debugTracing = None,            # Set debugTracing to True to output debug and trace logging
                 logFormat = LOG_FORMAT,         # Plugin log line format
                 dateFmt = "%y%m%d %H:%M:%S",    # Date format when logging to plugin log file
                 maxbytes = 2*1024*1024,         # Max size of plugin log file
                 backupcount = 2,                # Backup count when log file size reaches max size
                 logToWrnSet = 0,                # Customize the target output set which will get warning logging
                 logToErrSet = 0,                # Customize the target output set which will get error logging
                 logToNormSet = 0,               # Customize the target output set which will get normal logging
                 logFilePath = "",               # Plugin log file. If empty, the log file name is based on the current python file name and path
                 mainScriptName = "",            # The main plugin script file name (full path)
                 pluginID = "",
                 settings = None,                # Default settings for UI fields
                 config = None,                  # From pluginName_config.py or pluginName_setting.py
                 fragmentServer = None,
                 stash_url = None,               # Stash URL (endpoint URL). Example: http://localhost:9999
                 DebugTraceFieldName = "zzdebugTracing",
                 DryRunFieldName = "zzdryRun"):
        if logToWrnSet: self.log_to_wrn_set = logToWrnSet
        if logToErrSet: self.log_to_err_set = logToErrSet
        if logToNormSet: self.log_to_norm = logToNormSet
        if stash_url and len(stash_url): self.STASH_URL = stash_url
        self.MAIN_SCRIPT_NAME = mainScriptName if mainScriptName != "" else __main__.__file__
        self.PLUGIN_ID = pluginID if pluginID != "" else pathlib.Path(self.MAIN_SCRIPT_NAME).stem.lower()
        # print(f"self.MAIN_SCRIPT_NAME={self.MAIN_SCRIPT_NAME}, self.PLUGIN_ID={self.PLUGIN_ID}", file=sys.stderr)
        self.LOG_FILE_NAME = logFilePath if logFilePath != "" else f"{pathlib.Path(self.MAIN_SCRIPT_NAME).resolve().parent}{os.sep}{pathlib.Path(self.MAIN_SCRIPT_NAME).stem}.log"
        self.LOG_FILE_DIR = pathlib.Path(self.LOG_FILE_NAME).resolve().parent
        RFH = RotatingFileHandler(
            filename=self.LOG_FILE_NAME,
            mode='a',
            maxBytes=maxbytes,
            backupCount=backupcount,
            encoding=None,
            delay=0
        )
        if fragmentServer:
            self.FRAGMENT_SERVER = fragmentServer
        else:
            self.FRAGMENT_SERVER = {'Scheme': 'http', 'Host': '0.0.0.0', 'Port': '9999', 'SessionCookie': {'Name': 'session', 'Value': '', 'Path': '', 'Domain': '', 'Expires': '0001-01-01T00:00:00Z', 'RawExpires': '', 'MaxAge': 0, 'Secure': False, 'HttpOnly': False, 'SameSite': 0, 'Raw': '', 'Unparsed': None}, 'Dir': os.path.dirname(pathlib.Path(self.MAIN_SCRIPT_NAME).resolve().parent), 'PluginDir': pathlib.Path(self.MAIN_SCRIPT_NAME).resolve().parent}

        if debugTracing: self.DEBUG_TRACING = debugTracing
        if config:
            self.pluginConfig = config
            if DebugTraceFieldName in self.pluginConfig:
                self.DEBUG_TRACING = self.pluginConfig[DebugTraceFieldName]
            if DryRunFieldName in self.pluginConfig:
                self.DRY_RUN = self.pluginConfig[DryRunFieldName]

        if len(sys.argv) > 1:
            self.RUNNING_IN_COMMAND_LINE_MODE = True  # Bug fix: the original assigned a local variable instead of the instance member
            if not debugTracing or not stash_url:
                for argValue in sys.argv[1:]:
                    if argValue.lower() == "--trace":
                        self.DEBUG_TRACING = True
                    elif argValue.lower() == "--dry_run" or argValue.lower() == "--dryrun":
                        self.DRY_RUN = True
                    elif ":" in argValue and not self.STASH_URL:
                        self.STASH_URL = argValue
            if self.STASH_URL:
                endpointUrlArr = self.STASH_URL.split(":")
                if len(endpointUrlArr) == 3:
                    self.FRAGMENT_SERVER['Scheme'] = endpointUrlArr[0]
                    self.FRAGMENT_SERVER['Host'] = endpointUrlArr[1][2:]
                    self.FRAGMENT_SERVER['Port'] = endpointUrlArr[2]
            self.STASH_INTERFACE = self.ExtendStashInterface(self.FRAGMENT_SERVER)
        else:
            try:
                self.STDIN_READ = sys.stdin.read()
                self.CALLED_AS_STASH_PLUGIN = True
            except:
                pass
        if self.STDIN_READ:
            self.JSON_INPUT = json.loads(self.STDIN_READ)
            if "args" in self.JSON_INPUT and "mode" in self.JSON_INPUT["args"]:
                self.PLUGIN_TASK_NAME = self.JSON_INPUT["args"]["mode"]
            self.FRAGMENT_SERVER = self.JSON_INPUT["server_connection"]
            self.STASH_URL = f"{self.FRAGMENT_SERVER['Scheme']}://{self.FRAGMENT_SERVER['Host']}:{self.FRAGMENT_SERVER['Port']}"
            self.STASH_INTERFACE = self.ExtendStashInterface(self.FRAGMENT_SERVER)

        if self.STASH_INTERFACE:
            self.PLUGIN_CONFIGURATION = self.STASH_INTERFACE.get_configuration()["plugins"]
            self.STASH_CONFIGURATION = self.STASH_INTERFACE.get_configuration()["general"]
            if settings:
                self.pluginSettings = settings
                if self.PLUGIN_ID in self.PLUGIN_CONFIGURATION:
                    self.pluginSettings.update(self.PLUGIN_CONFIGURATION[self.PLUGIN_ID])
                if DebugTraceFieldName in self.pluginSettings:
                    self.DEBUG_TRACING = self.pluginSettings[DebugTraceFieldName]
                if DryRunFieldName in self.pluginSettings:
                    self.DRY_RUN = self.pluginSettings[DryRunFieldName]
        if self.DEBUG_TRACING: self.LOG_LEVEL = logging.DEBUG

        logging.basicConfig(level=self.LOG_LEVEL, format=logFormat, datefmt=dateFmt, handlers=[RFH])
        self.logger = logging.getLogger(pathlib.Path(self.MAIN_SCRIPT_NAME).stem)

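The endpoint parsing in the constructor splits the URL on `:` into scheme, host, and port, stripping the `//` prefix from the host segment. A standalone sketch of that split (the helper name is illustrative):

```python
def split_stash_url(url):
    """Mirror the constructor's parsing: 'http://localhost:9999' splits on ':'
    into scheme, '//host' (leading slashes stripped), and port."""
    parts = url.split(":")
    if len(parts) != 3:
        return None  # the constructor only updates FRAGMENT_SERVER for 3-part URLs
    return {"Scheme": parts[0], "Host": parts[1][2:], "Port": parts[2]}

print(split_stash_url("http://localhost:9999"))
# → {'Scheme': 'http', 'Host': 'localhost', 'Port': '9999'}
```

Note the 3-part requirement: a URL without an explicit port leaves FRAGMENT_SERVER at its defaults.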
    def Log(self, logMsg, printTo = 0, logLevel = logging.INFO, lineNo = -1, levelStr = "", logAlways = False):
        if printTo == 0:
            printTo = self.log_to_norm
        elif printTo == self.LOG_TO_ERROR and logLevel == logging.INFO:
            logLevel = logging.ERROR
            printTo = self.log_to_err_set
        elif printTo == self.LOG_TO_CRITICAL and logLevel == logging.INFO:
            logLevel = logging.CRITICAL
            printTo = self.log_to_err_set
        elif printTo == self.LOG_TO_WARN and logLevel == logging.INFO:
            logLevel = logging.WARN
            printTo = self.log_to_wrn_set
        if lineNo == -1:
            lineNo = inspect.currentframe().f_back.f_lineno
        LN_Str = f"[LN:{lineNo}]"
        # print(f"{LN_Str}, {logAlways}, {self.LOG_LEVEL}, {logging.DEBUG}, {levelStr}, {logMsg}")
        if logLevel == logging.DEBUG and (logAlways == False or self.LOG_LEVEL == logging.DEBUG):
            if levelStr == "": levelStr = self.LEV_DBG
            if printTo & self.LOG_TO_FILE: self.logger.debug(f"{LN_Str} {levelStr}{logMsg}")
            if printTo & self.LOG_TO_STASH: stashLog.debug(f"{LN_Str} {levelStr}{logMsg}")
        elif logLevel == logging.INFO or logLevel == logging.DEBUG:
            if levelStr == "": levelStr = self.LEV_INF if logLevel == logging.INFO else self.LEV_DBG
            if printTo & self.LOG_TO_FILE: self.logger.info(f"{LN_Str} {levelStr}{logMsg}")
            if printTo & self.LOG_TO_STASH: stashLog.info(f"{LN_Str} {levelStr}{logMsg}")
        elif logLevel == logging.WARN:
            if levelStr == "": levelStr = self.LEV_WRN
            if printTo & self.LOG_TO_FILE: self.logger.warning(f"{LN_Str} {levelStr}{logMsg}")
            if printTo & self.LOG_TO_STASH: stashLog.warning(f"{LN_Str} {levelStr}{logMsg}")
        elif logLevel == logging.ERROR:
            if levelStr == "": levelStr = self.LEV_ERR
            if printTo & self.LOG_TO_FILE: self.logger.error(f"{LN_Str} {levelStr}{logMsg}")
            if printTo & self.LOG_TO_STASH: stashLog.error(f"{LN_Str} {levelStr}{logMsg}")
        elif logLevel == logging.CRITICAL:
            if levelStr == "": levelStr = self.LEV_CRITICAL
            if printTo & self.LOG_TO_FILE: self.logger.critical(f"{LN_Str} {levelStr}{logMsg}")
            if printTo & self.LOG_TO_STASH: stashLog.error(f"{LN_Str} {levelStr}{logMsg}")
        if (printTo & self.LOG_TO_CONSOLE) and (logLevel != logging.DEBUG or self.DEBUG_TRACING or logAlways):
            print(f"{LN_Str} {levelStr}{logMsg}")
        if (printTo & self.LOG_TO_STDERR) and (logLevel != logging.DEBUG or self.DEBUG_TRACING or logAlways):
            print(f"StdErr: {LN_Str} {levelStr}{logMsg}", file=sys.stderr)

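The printTo parameter above treats the LOG_TO_* constants as bit flags: targets are combined by addition (or bitwise OR) and tested with `&`. A standalone sketch, assuming the same flag values as the class:

```python
# Same flag values as StashPluginHelper's printTo constants.
LOG_TO_FILE = 1
LOG_TO_CONSOLE = 2
LOG_TO_STDERR = 4
LOG_TO_STASH = 8

# Combine targets by addition, as the class does for log_to_wrn_set etc.
targets = LOG_TO_FILE + LOG_TO_STASH

# Test membership with bitwise AND, as Log() does with `printTo & self.LOG_TO_FILE`.
print(bool(targets & LOG_TO_FILE))     # → True
print(bool(targets & LOG_TO_CONSOLE))  # → False
print(bool(targets & LOG_TO_STASH))    # → True
```

Because each flag is a distinct power of two, any combination of targets is unambiguous and can be decomposed by the receiver.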
    def Trace(self, logMsg = "", printTo = 0, logAlways = False):
        if printTo == 0: printTo = self.LOG_TO_FILE
        lineNo = inspect.currentframe().f_back.f_lineno
        logLev = logging.INFO if logAlways else logging.DEBUG
        if self.DEBUG_TRACING or logAlways:
            if logMsg == "":
                logMsg = f"Line number {lineNo}..."
            self.Log(logMsg, printTo, logLev, lineNo, self.LEV_TRACE, logAlways)

    # Log once per session. Only logs the first time it's called from a particular line number in the code.
    def TraceOnce(self, logMsg = "", printTo = 0, logAlways = False):
        if printTo == 0: printTo = self.LOG_TO_FILE
        lineNo = inspect.currentframe().f_back.f_lineno
        logLev = logging.INFO if logAlways else logging.DEBUG
        if self.DEBUG_TRACING or logAlways:
            FuncAndLineNo = f"{inspect.currentframe().f_back.f_code.co_name}:{lineNo}"
            if FuncAndLineNo in self.traceOncePreviousHits:
                return
            self.traceOncePreviousHits.append(FuncAndLineNo)
            if logMsg == "":
                logMsg = f"Line number {lineNo}..."
            self.Log(logMsg, printTo, logLev, lineNo, self.LEV_TRACE, logAlways)

    def Warn(self, logMsg, printTo = 0):
        if printTo == 0: printTo = self.log_to_wrn_set
        lineNo = inspect.currentframe().f_back.f_lineno
        self.Log(logMsg, printTo, logging.WARN, lineNo)

    def Error(self, logMsg, printTo = 0):
        if printTo == 0: printTo = self.log_to_err_set
        lineNo = inspect.currentframe().f_back.f_lineno
        self.Log(logMsg, printTo, logging.ERROR, lineNo)

    def Status(self, printTo = 0, logLevel = logging.INFO, lineNo = -1):
        if printTo == 0: printTo = self.log_to_norm
        if lineNo == -1:
            lineNo = inspect.currentframe().f_back.f_lineno
        self.Log(f"StashPluginHelper Status: (CALLED_AS_STASH_PLUGIN={self.CALLED_AS_STASH_PLUGIN}), (RUNNING_IN_COMMAND_LINE_MODE={self.RUNNING_IN_COMMAND_LINE_MODE}), (DEBUG_TRACING={self.DEBUG_TRACING}), (DRY_RUN={self.DRY_RUN}), (PLUGIN_ID={self.PLUGIN_ID}), (PLUGIN_TASK_NAME={self.PLUGIN_TASK_NAME}), (STASH_URL={self.STASH_URL}), (MAIN_SCRIPT_NAME={self.MAIN_SCRIPT_NAME})",
                 printTo, logLevel, lineNo)

    # Extends class StashInterface with functions which are not yet in that class
    class ExtendStashInterface(StashInterface):
        def metadata_autotag(self, paths:list=[], dry_run=False):
            if not paths:
                return

            query = """
            mutation MetadataAutoTag($input:AutoTagMetadataInput!) {
                metadataAutoTag(input: $input)
            }
            """

            metadata_autotag_input = {
                "paths": paths
            }
            result = self.call_GQL(query, {"input": metadata_autotag_input})
            return result

        def backup_database(self):
            return self.call_GQL("mutation { backupDatabase(input: {download: false})}")

        def optimise_database(self):
            return self.call_GQL("mutation OptimiseDatabase { optimiseDatabase }")
@@ -1,393 +0,0 @@
# Description: This is a Stash plugin which updates Stash if any change occurs in the Stash library paths.
# By David Maisonave (aka Axter) Jul-2024 (https://www.axter.com/)
# Get the latest developers version from the following link: https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/FileMonitor
# Note: To call this script outside of Stash, pass --url and the Stash URL.
#       Example: python filemonitor.py --url http://localhost:9999
import os, sys, time, pathlib, argparse
from StashPluginHelper import StashPluginHelper
import watchdog.events  # pip install watchdog # https://pythonhosted.org/watchdog/
from watchdog.observers import Observer  # This is also needed for event attributes
from threading import Lock, Condition
from multiprocessing import shared_memory
from filemonitor_config import config  # Import settings from filemonitor_config.py

CONTINUE_RUNNING_SIG = 99
STOP_RUNNING_SIG = 32

parser = argparse.ArgumentParser()
parser.add_argument('--url', '-u', dest='stash_url', type=str, help='Add Stash URL')
parser.add_argument('--trace', '-t', dest='trace', action='store_true', help='Enables debug trace mode.')
parser.add_argument('--stop', '-s', dest='stop', action='store_true', help='Stop (kill) a running FileMonitor task.')
parser.add_argument('--restart', '-r', dest='restart', action='store_true', help='Restart FileMonitor.')
parser.add_argument('--silent', '--quit', '-q', dest='quit', action='store_true', help='Run in silent mode. No output to console or stderr. Use this when running from pythonw.exe')
parse_args = parser.parse_args()

logToErrSet = 0
logToNormSet = 0
if parse_args.quit:
    logToErrSet = 1
    logToNormSet = 1

settings = {
    "recursiveDisabled": False,
    "turnOnScheduler": False,
    "zzdebugTracing": False,
    "zzdryRun": False,
}
plugin = StashPluginHelper(
    stash_url=parse_args.stash_url,
    debugTracing=parse_args.trace,
    settings=settings,
    config=config,
    logToErrSet=logToErrSet,
    logToNormSet=logToNormSet
)
plugin.Status()
plugin.Log(f"\nStarting (__file__={__file__}) (plugin.CALLED_AS_STASH_PLUGIN={plugin.CALLED_AS_STASH_PLUGIN}) (plugin.DEBUG_TRACING={plugin.DEBUG_TRACING}) (plugin.DRY_RUN={plugin.DRY_RUN}) (plugin.PLUGIN_TASK_NAME={plugin.PLUGIN_TASK_NAME})************************************************")

exitMsg = "Change success!!"
mutex = Lock()
signal = Condition(mutex)
shouldUpdate = False
TargetPaths = []

SHAREDMEMORY_NAME = "DavidMaisonaveAxter_FileMonitor"
RECURSIVE = plugin.pluginSettings["recursiveDisabled"] == False
SCAN_MODIFIED = plugin.pluginConfig["scanModified"]
RUN_CLEAN_AFTER_DELETE = plugin.pluginConfig["runCleanAfterDelete"]
RUN_GENERATE_CONTENT = plugin.pluginConfig['runGenerateContent']
SCAN_ON_ANY_EVENT = plugin.pluginConfig['onAnyEvent']
SIGNAL_TIMEOUT = plugin.pluginConfig['timeOut']

CREATE_SPECIAL_FILE_TO_EXIT = plugin.pluginConfig['createSpecFileToExit']
DELETE_SPECIAL_FILE_ON_STOP = plugin.pluginConfig['deleteSpecFileInStop']
SPECIAL_FILE_DIR = f"{plugin.LOG_FILE_DIR}{os.sep}working"
if not os.path.exists(SPECIAL_FILE_DIR) and CREATE_SPECIAL_FILE_TO_EXIT:
    os.makedirs(SPECIAL_FILE_DIR)
# Unique name used to trigger shutting down FileMonitor
SPECIAL_FILE_NAME = f"{SPECIAL_FILE_DIR}{os.sep}trigger_to_kill_filemonitor_by_david_maisonave.txt"

STASHPATHSCONFIG = plugin.STASH_CONFIGURATION['stashes']
stashPaths = []
for item in STASHPATHSCONFIG:
    stashPaths.append(item["path"])
plugin.Trace(f"(stashPaths={stashPaths})")

if plugin.DRY_RUN:
    plugin.Log("Dry run mode is enabled.")
plugin.Trace(f"(SCAN_MODIFIED={SCAN_MODIFIED}) (SCAN_ON_ANY_EVENT={SCAN_ON_ANY_EVENT}) (RECURSIVE={RECURSIVE})")

FileMonitorPluginIsOnTaskQue = plugin.CALLED_AS_STASH_PLUGIN
StopLibraryMonitorWaitingInTaskQueue = False
JobIdInTheQue = 0

def isJobWaitingToRun():
    global StopLibraryMonitorWaitingInTaskQueue
    global JobIdInTheQue
    global FileMonitorPluginIsOnTaskQue
    FileMonitorPluginIsOnTaskQue = False
    jobIsWaiting = False
    taskQue = plugin.STASH_INTERFACE.job_queue()
    for jobDetails in taskQue:
        plugin.Trace(f"(Job ID({jobDetails['id']})={jobDetails})")
        if jobDetails['status'] == "READY":
            if jobDetails['description'] == "Running plugin task: Stop Library Monitor":
                StopLibraryMonitorWaitingInTaskQueue = True
            JobIdInTheQue = jobDetails['id']
            jobIsWaiting = True
        elif jobDetails['status'] == "RUNNING" and jobDetails['description'].find("Start Library Monitor") > -1:
            FileMonitorPluginIsOnTaskQue = True
    JobIdInTheQue = 0
    return jobIsWaiting

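isJobWaitingToRun() above walks the Stash job queue looking for READY jobs (including a pending "Stop Library Monitor" request) and for a running "Start Library Monitor" task. Its core filtering logic can be sketched against a plain list of job dicts; the sample data below is illustrative, not real Stash output:

```python
def find_ready_job(task_queue):
    """Return the id of the first READY job, or None; mirrors the queue walk above."""
    for job in task_queue:
        if job['status'] == "READY":
            return job['id']
    return None

sample_queue = [
    {'id': '7', 'status': "RUNNING", 'description': "Running plugin task: Start Library Monitor"},
    {'id': '8', 'status': "READY", 'description': "Running plugin task: Stop Library Monitor"},
]
print(find_ready_job(sample_queue))  # → '8'
```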
if plugin.CALLED_AS_STASH_PLUGIN:
    plugin.Trace(f"isJobWaitingToRun() = {isJobWaitingToRun()})")

# Reoccurring scheduler code
# ToDo: Change the following functions into a class called reoccurringScheduler
def runTask(task):
    import datetime
    plugin.Trace(f"Running task {task}")
    if 'monthly' in task:
        dayOfTheMonth = datetime.datetime.today().day
        FirstAllowedDate = ((task['monthly'] - 1) * 7) + 1
        LastAllowedDate = task['monthly'] * 7
        if dayOfTheMonth < FirstAllowedDate or dayOfTheMonth > LastAllowedDate:
            plugin.Log(f"Skipping task {task['task']} because today is not the right {task['weekday']} of the month. Target range is between {FirstAllowedDate} and {LastAllowedDate}.")
            return
    if task['task'] == "Clean":
        plugin.STASH_INTERFACE.metadata_clean(paths=stashPaths, dry_run=plugin.DRY_RUN)
    elif task['task'] == "Generate":
        plugin.STASH_INTERFACE.metadata_generate()
    elif task['task'] == "Backup":
        plugin.STASH_INTERFACE.call_GQL("mutation { backupDatabase(input: {download: false})}")
    elif task['task'] == "Scan":
        plugin.STASH_INTERFACE.metadata_scan(paths=stashPaths)
    elif task['task'] == "Auto Tag":
        plugin.STASH_INTERFACE.metadata_autotag(paths=stashPaths, dry_run=plugin.DRY_RUN)
    elif task['task'] == "Optimise Database":
        plugin.STASH_INTERFACE.optimise_database()
    else:
        # ToDo: Add code to check if the plugin is installed.
        plugin.Trace(f"Running plugin task pluginID={task['pluginId']}, task name = {task['task']}")
        plugin.STASH_INTERFACE.run_plugin_task(plugin_id=task['pluginId'], task_name=task['task'])

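The if/elif chain in runTask maps a task name to a StashInterface call, falling back to run_plugin_task for anything unrecognized. The same mapping could be expressed as a dispatch table; a hedged sketch with stand-in callables (the real methods live on plugin.STASH_INTERFACE):

```python
calls = []  # records which stand-in was invoked, for demonstration

# Stand-ins for the StashInterface methods used by runTask.
TASK_DISPATCH = {
    "Clean": lambda: calls.append("metadata_clean"),
    "Generate": lambda: calls.append("metadata_generate"),
    "Scan": lambda: calls.append("metadata_scan"),
    "Auto Tag": lambda: calls.append("metadata_autotag"),
    "Optimise Database": lambda: calls.append("optimise_database"),
    "Backup": lambda: calls.append("backup_database"),
}

def run_task(task):
    handler = TASK_DISPATCH.get(task["task"])
    if handler:
        handler()
    else:
        # Unknown names fall through to the plugin branch, as in runTask's else clause.
        calls.append(f"run_plugin_task:{task.get('pluginId')}")

run_task({"task": "Scan", "weekday": "sunday", "time": "03:00"})
run_task({"task": "Create Tags", "pluginId": "pathParser", "hours": 0})
print(calls)  # → ['metadata_scan', 'run_plugin_task:pathParser']
```

A table like this keeps the name-to-action mapping in one place, which matters as the supported task list grows.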
def reoccurringScheduler():
    import schedule  # pip install schedule # https://github.com/dbader/schedule
    # ToDo: Extend the schedule class so it works persistently (remembers the schedule between restarts)
    #       Or replace schedule with apscheduler https://github.com/agronholm/apscheduler
    dayOfTheWeek = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]
    for task in plugin.pluginConfig['task_reoccurring_scheduler']:
        if 'hours' in task and task['hours'] > 0:
            plugin.Log(f"Adding task '{task['task']}' to the reoccurring scheduler at a {task['hours']} hour interval")
            schedule.every(task['hours']).hours.do(runTask, task)
        elif 'minutes' in task and task['minutes'] > 0:
            plugin.Log(f"Adding task '{task['task']}' to the reoccurring scheduler at a {task['minutes']} minute interval")
            schedule.every(task['minutes']).minutes.do(runTask, task)
        elif 'days' in task and task['days'] > 0:
            plugin.Log(f"Adding task '{task['task']}' to the reoccurring scheduler at a {task['days']} day interval")
            schedule.every(task['days']).days.do(runTask, task)
        elif 'weekday' in task and task['weekday'].lower() in dayOfTheWeek and 'time' in task:
            if 'monthly' in task:
                plugin.Log(f"Adding task '{task['task']}' to the reoccurring scheduler, monthly on {task['weekday']} number {task['monthly']} at {task['time']}")
            else:
                plugin.Log(f"Adding task '{task['task']}' to the reoccurring scheduler (weekly) every {task['weekday']} at {task['time']}")
            if task['weekday'].lower() == "monday":
                schedule.every().monday.at(task['time']).do(runTask, task)
            elif task['weekday'].lower() == "tuesday":
                schedule.every().tuesday.at(task['time']).do(runTask, task)
            elif task['weekday'].lower() == "wednesday":
                schedule.every().wednesday.at(task['time']).do(runTask, task)
            elif task['weekday'].lower() == "thursday":
                schedule.every().thursday.at(task['time']).do(runTask, task)
            elif task['weekday'].lower() == "friday":
                schedule.every().friday.at(task['time']).do(runTask, task)
            elif task['weekday'].lower() == "saturday":
                schedule.every().saturday.at(task['time']).do(runTask, task)
            elif task['weekday'].lower() == "sunday":
                schedule.every().sunday.at(task['time']).do(runTask, task)

def checkSchedulePending():
    import schedule  # pip install schedule # https://github.com/dbader/schedule
    schedule.run_pending()

if plugin.pluginSettings['turnOnScheduler']:
    reoccurringScheduler()

def start_library_monitor():
|
||||
global shouldUpdate
|
||||
global TargetPaths
|
||||
try:
|
||||
# Create shared memory buffer which can be used as singleton logic or to get a signal to quit task from external script
|
||||
shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=True, size=4)
|
||||
except:
|
||||
plugin.Error(f"Could not open shared memory map ({SHAREDMEMORY_NAME}). Change File Monitor must be running. Can not run multiple instance of Change File Monitor. Stop FileMonitor before trying to start it again.")
|
||||
return
|
||||
type(shm_a.buf)
|
||||
shm_buffer = shm_a.buf
|
||||
len(shm_buffer)
|
||||
shm_buffer[0] = CONTINUE_RUNNING_SIG
|
||||
plugin.Trace(f"Shared memory map opended, and flag set to {shm_buffer[0]}")
|
||||
RunCleanMetadata = False
|
||||
|
||||
    event_handler = watchdog.events.FileSystemEventHandler()

    def on_created(event):
        global shouldUpdate
        global TargetPaths
        TargetPaths.append(event.src_path)
        plugin.Log(f"CREATE *** '{event.src_path}'")
        with mutex:
            shouldUpdate = True
            signal.notify()

    def on_deleted(event):
        global shouldUpdate
        global TargetPaths
        nonlocal RunCleanMetadata
        TargetPaths.append(event.src_path)
        plugin.Log(f"DELETE *** '{event.src_path}'")
        with mutex:
            shouldUpdate = True
            RunCleanMetadata = True
            signal.notify()

    def on_modified(event):
        global shouldUpdate
        global TargetPaths
        if SCAN_MODIFIED:
            TargetPaths.append(event.src_path)
            plugin.Log(f"MODIFIED *** '{event.src_path}'")
            with mutex:
                shouldUpdate = True
                signal.notify()
        else:
            plugin.TraceOnce(f"Ignoring modifications due to plugin UI setting. path='{event.src_path}'")

    def on_moved(event):
        global shouldUpdate
        global TargetPaths
        TargetPaths.append(event.src_path)
        TargetPaths.append(event.dest_path)
        plugin.Log(f"MOVE *** from '{event.src_path}' to '{event.dest_path}'")
        with mutex:
            shouldUpdate = True
            signal.notify()

    def on_any_event(event):
        global shouldUpdate
        global TargetPaths
        if SCAN_ON_ANY_EVENT:
            plugin.Log(f"Any-Event *** '{event.src_path}'")
            TargetPaths.append(event.src_path)
            with mutex:
                shouldUpdate = True
                signal.notify()
        else:
            plugin.TraceOnce("Ignoring on_any_event trigger.")
    event_handler.on_created = on_created
    event_handler.on_deleted = on_deleted
    event_handler.on_modified = on_modified
    event_handler.on_moved = on_moved
    event_handler.on_any_event = on_any_event

    observer = Observer()
    # Iterate through stashPaths, scheduling a watch on each library path
    for path in stashPaths:
        observer.schedule(event_handler, path, recursive=RECURSIVE)
        plugin.Trace(f"Observing {path}")
    observer.schedule(event_handler, SPECIAL_FILE_DIR, recursive=RECURSIVE)
    plugin.Trace(f"Observing FileMonitor path {SPECIAL_FILE_DIR}")
    observer.start()
    JobIsRunning = False
    PutPluginBackOnTaskQueAndExit = False
    plugin.Trace("Starting loop")
    try:
        while True:
            TmpTargetPaths = []
            with mutex:
                while not shouldUpdate:
                    plugin.Trace("While not shouldUpdate")
                    if plugin.CALLED_AS_STASH_PLUGIN and isJobWaitingToRun():
                        if FileMonitorPluginIsOnTaskQue:
                            plugin.Log(f"Another task (JobID={JobIdInTheQue}) is waiting on the queue. Will restart FileMonitor to allow the other task to run.")
                            JobIsRunning = True
                            break
                        else:
                            plugin.Warn("Not restarting because FileMonitor is no longer on the Task Queue")
                    if shm_buffer[0] != CONTINUE_RUNNING_SIG:
                        plugin.Log(f"Breaking out of loop. (shm_buffer[0]={shm_buffer[0]})")
                        break
                    if plugin.pluginSettings['turnOnScheduler']:
                        checkSchedulePending()
                    plugin.Trace("Wait start")
                    signal.wait(timeout=SIGNAL_TIMEOUT)
                    plugin.Trace("Wait end")
                shouldUpdate = False
                TmpTargetPaths = []
                for TargetPath in TargetPaths:
                    TmpTargetPaths.append(os.path.dirname(TargetPath))
                    if TargetPath == SPECIAL_FILE_DIR:
                        if os.path.isfile(SPECIAL_FILE_NAME):
                            shm_buffer[0] = STOP_RUNNING_SIG
                            plugin.Log(f"[SpFl]Detected trigger file to kill FileMonitor. {SPECIAL_FILE_NAME}", printTo=plugin.LOG_TO_FILE + plugin.LOG_TO_CONSOLE + plugin.LOG_TO_STASH)
                        else:
                            plugin.Trace(f"[SpFl]Did not find file {SPECIAL_FILE_NAME}.")
                TargetPaths = []
                TmpTargetPaths = list(set(TmpTargetPaths))
            if TmpTargetPaths != []:
                plugin.Log(f"Triggering Stash scan for path(s) {TmpTargetPaths}")
                if len(TmpTargetPaths) > 1 or TmpTargetPaths[0] != SPECIAL_FILE_DIR:
                    if not plugin.DRY_RUN:
                        plugin.STASH_INTERFACE.metadata_scan(paths=TmpTargetPaths)
                    if RUN_CLEAN_AFTER_DELETE and RunCleanMetadata:
                        plugin.STASH_INTERFACE.metadata_clean(paths=TmpTargetPaths, dry_run=plugin.DRY_RUN)
                    if RUN_GENERATE_CONTENT:
                        plugin.STASH_INTERFACE.metadata_generate()
                    if plugin.CALLED_AS_STASH_PLUGIN and shm_buffer[0] == CONTINUE_RUNNING_SIG and FileMonitorPluginIsOnTaskQue:
                        PutPluginBackOnTaskQueAndExit = True
            else:
                plugin.Trace("Nothing to scan.")

            if shm_buffer[0] != CONTINUE_RUNNING_SIG or StopLibraryMonitorWaitingInTaskQueue:
                plugin.Log(f"Exiting FileMonitor. (shm_buffer[0]={shm_buffer[0]}) (StopLibraryMonitorWaitingInTaskQueue={StopLibraryMonitorWaitingInTaskQueue})")
                shm_a.close()
                shm_a.unlink()  # Call unlink only once to release the shared memory
                raise KeyboardInterrupt
            elif JobIsRunning or PutPluginBackOnTaskQueAndExit:
                plugin.STASH_INTERFACE.run_plugin_task(plugin_id=plugin.PLUGIN_ID, task_name="Start Library Monitor")
                plugin.Trace("Exiting plugin so that the other task can run.")
                return
    except KeyboardInterrupt:
        observer.stop()
        plugin.Trace("Stopping observer")
        if os.path.isfile(SPECIAL_FILE_NAME):
            os.remove(SPECIAL_FILE_NAME)
        observer.join()
    plugin.Trace("Exiting function")
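The single-instance behavior `start_library_monitor` relies on comes from `SharedMemory(create=True)` raising `FileExistsError` when the name already exists. A small standalone sketch (the `fm_demo` name is made up for illustration, not FileMonitor's actual map name):

```python
from multiprocessing import shared_memory

first = shared_memory.SharedMemory(name="fm_demo", create=True, size=4)
first.buf[0] = 1  # e.g. a CONTINUE_RUNNING_SIG-style flag
second_create_failed = False
try:
    # A second create=True on the same name is refused by the OS.
    shared_memory.SharedMemory(name="fm_demo", create=True, size=4)
except FileExistsError:
    second_create_failed = True

first.close()
first.unlink()  # unlink once to release the shared memory
print(second_create_failed)  # True
```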
# Example: python filemonitor.py --stop
def stop_library_monitor():
    if CREATE_SPECIAL_FILE_TO_EXIT:
        if os.path.isfile(SPECIAL_FILE_NAME):
            os.remove(SPECIAL_FILE_NAME)
        pathlib.Path(SPECIAL_FILE_NAME).touch()
        if DELETE_SPECIAL_FILE_ON_STOP:
            os.remove(SPECIAL_FILE_NAME)
    plugin.Trace("Opening shared memory map.")
    try:
        shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=False, size=4)
    except Exception:
        # If FileMonitor is running as a plugin, it's expected that the shared memory will not be available.
        plugin.Trace(f"Could not open shared memory map ({SHAREDMEMORY_NAME}). FileMonitor must not be running.")
        return
    shm_buffer = shm_a.buf
    shm_buffer[0] = STOP_RUNNING_SIG
    plugin.Trace(f"Shared memory map opened, and flag set to {shm_buffer[0]}")
    shm_a.close()
    shm_a.unlink()  # Call unlink only once to release the shared memory
def start_library_monitor_service():
    import subprocess
    import platform
    # First check if FileMonitor is already running
    try:
        shm_a = shared_memory.SharedMemory(name=SHAREDMEMORY_NAME, create=False, size=4)
        shm_a.close()
        shm_a.unlink()
        plugin.Error("FileMonitor is already running. Stop FileMonitor before trying to start it again.")
        return
    except Exception:
        pass
    plugin.Trace("FileMonitor is not running, so it's safe to start it as a service.")
    is_windows = any(platform.win32_ver())
    PythonExe = f"{sys.executable}"
    # PythonExe = PythonExe.replace("python.exe", "pythonw.exe")
    args = [f"{PythonExe}", f"{pathlib.Path(__file__).resolve().parent}{os.sep}filemonitor.py", '--url', f"{plugin.STASH_URL}"]
    plugin.Trace(f"args={args}")
    if is_windows:
        plugin.Trace("Executing process using Windows DETACHED_PROCESS")
        DETACHED_PROCESS = 0x00000008
        pid = subprocess.Popen(args, creationflags=DETACHED_PROCESS, shell=True).pid
    else:
        plugin.Trace("Executing process using normal Popen")
        pid = subprocess.Popen(args).pid
    plugin.Trace(f"pid={pid}")
if parse_args.stop or parse_args.restart or plugin.PLUGIN_TASK_NAME == "stop_library_monitor":
    stop_library_monitor()
    if parse_args.restart:
        time.sleep(5)
        plugin.STASH_INTERFACE.run_plugin_task(plugin_id=plugin.PLUGIN_ID, task_name="Start Library Monitor")
        plugin.Trace("Restart FileMonitor EXIT")
    else:
        plugin.Trace("Stop FileMonitor EXIT")
elif plugin.PLUGIN_TASK_NAME == "start_library_monitor_service":
    start_library_monitor_service()
    plugin.Trace("start_library_monitor_service EXIT")
elif plugin.PLUGIN_TASK_NAME == "start_library_monitor" or not plugin.CALLED_AS_STASH_PLUGIN:
    start_library_monitor()
    plugin.Trace("start_library_monitor EXIT")
else:
    plugin.Log(f"Nothing to do!!! (plugin.PLUGIN_TASK_NAME={plugin.PLUGIN_TASK_NAME})")

plugin.Trace("\n*********************************\nEXITING ***********************\n*********************************")
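The dispatch above reads `parse_args.stop` and `parse_args.restart`, whose setup lives elsewhere in filemonitor.py. A hedged sketch of what that argument parsing likely looks like (flag names are taken from the code above; the help strings and the simulated argument list are illustrative assumptions):

```python
import argparse

parser = argparse.ArgumentParser(description="FileMonitor standalone runner (sketch)")
parser.add_argument('--url', type=str, help="Stash URL, e.g. http://localhost:9999")
parser.add_argument('--stop', action='store_true', help="Signal a running FileMonitor to stop")
parser.add_argument('--restart', action='store_true', help="Stop, then restart FileMonitor as a Stash task")

# Simulate: python filemonitor.py --url http://localhost:9999
parse_args = parser.parse_args(['--url', 'http://localhost:9999'])
print(parse_args.url, parse_args.stop, parse_args.restart)  # http://localhost:9999 False False
```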
@@ -1,38 +0,0 @@
name: FileMonitor
description: Monitors the Stash library folders, and updates Stash if any change occurs in the Stash library paths.
version: 0.7.2
url: https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/FileMonitor
settings:
  recursiveDisabled:
    displayName: No Recursive
    description: Enable to STOP monitoring paths recursively.
    type: BOOLEAN
  turnOnScheduler:
    displayName: Scheduler
    description: Enable to turn on the scheduler. See filemonitor_config.py for more details.
    type: BOOLEAN
  zzdebugTracing:
    displayName: Debug Tracing
    description: (Default=false) [***For Advanced Users***] Enable debug tracing. When enabled, additional tracing logging is added to Stash\plugins\FileMonitor\filemonitor.log
    type: BOOLEAN
  zzdryRun:
    displayName: Dry Run
    description: Enable to run the script in [Dry Run] mode. In this mode, Stash does NOT call metadata_scan, and only logs the action it would have taken.
    type: BOOLEAN
exec:
  - python
  - "{pluginDir}/filemonitor.py"
interface: raw
tasks:
  - name: Start Library Monitor Service
    description: Run as a SERVICE to monitor paths in the Stash library for media file changes, and update Stash. Recommended start method.
    defaultArgs:
      mode: start_library_monitor_service
  - name: Stop Library Monitor
    description: Stops library monitoring within 2 minutes.
    defaultArgs:
      mode: stop_library_monitor
  - name: Run as a Plugin
    description: Run [Library Monitor] as a plugin (*not the recommended method*)
    defaultArgs:
      mode: start_library_monitor
@@ -1,60 +0,0 @@
# Description: This is a Stash plugin which updates Stash if any change occurs in the Stash library paths.
# By David Maisonave (aka Axter) Jul-2024 (https://www.axter.com/)
# Get the latest developers version from the following link: https://github.com/David-Maisonave/Axter-Stash/tree/main/plugins/FileMonitor
config = {
    # Enable to run metadata_generate (Generate Content) after a metadata scan.
    "runGenerateContent": False,
    # Enable to run a scan when triggered by on_any_event.
    "onAnyEvent": False,
    # Enable to monitor the file system for the modification flag. This option is NOT needed for Windows, because on Windows changes are triggered via the CREATE, DELETE, and MOVE flags. Other OSes may differ.
    "scanModified": False,
    # Timeout in seconds. This is how often it will check if another job (Task) is in the queue.
    "timeOut": 60,  # Not needed when running in command-line mode.
    # Enable to exit FileMonitor by creating a special file in the plugin's working folder
    "createSpecFileToExit": True,
    # Enable to delete the special file immediately after it's created in the stop process
    "deleteSpecFileInStop": False,
    # Enable to run the metadata clean task after a file deletion.
    "runCleanAfterDelete": False,

    # The reoccurring scheduler task list.
    # Tasks can be scheduled to run monthly, weekly, hourly, and by minutes. For best results, use the scheduler with FileMonitor running as a service.
    # The frequency field can be in minutes or hours. A zero frequency value disables the task.
    # For weekly and monthly tasks, use the syntax shown in the **Generate** and **Backup** tasks below.
    "task_reoccurring_scheduler": [
        {"task": "Clean", "hours": 48},  # Maintenance -> [Clean] (every 2 days)
        {"task": "Auto Tag", "hours": 24},  # Auto Tag -> [Auto Tag] (daily)
        {"task": "Optimise Database", "hours": 24},  # Maintenance -> [Optimise Database] (daily)

        # The following is the syntax used for plugins. A plugin task requires the plugin name in the [task] field, and the plugin ID in the [pluginId] field.
        {"task": "Create Tags", "pluginId": "pathParser", "hours": 0},  # This task requires plugin [Path Parser]. To enable this task, change the zero to a positive number.

        # Note: For a weekly task, the weekday method is more reliable. The hour section in time MUST be a two-digit number in military (24-hour) time format. Example: 1PM = "13:00"
        {"task": "Generate", "weekday": "sunday", "time": "07:00"},  # Generated Content -> [Generate] (every Sunday at 7AM)
        {"task": "Scan", "weekday": "sunday", "time": "03:00"},  # Library -> [Scan] (weekly, every Sunday at 3AM)

        # To perform a task monthly, specify the day of the month as in the weekly schedule format, and add a monthly field.
        # The monthly field value must be 1, 2, 3, or 4.
        # 1 = 1st specified weekday of the month. Example: 1st Monday.
        # 2 = 2nd specified weekday of the month. Example: 2nd Monday of the month.
        # 3 = 3rd specified weekday of the month.
        # 4 = 4th specified weekday of the month.
        # Example monthly method.
        {"task": "Backup", "weekday": "saturday", "time": "02:30", "monthly": 2},  # Backup -> [Backup] (2nd Saturday of the month at 2:30AM)

        # The following is a placeholder for a plugin.
        {"task": "PluginButtonName_Here", "pluginId": "PluginId_Here", "hours": 0},  # The zero frequency value disables this task.
        # Add additional plugin tasks here.
    ],
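The `monthly` field above (1-4 = the Nth occurrence of the weekday within the month) can be computed from the day of the month alone. A sketch of that check (the function name is illustrative; this is not FileMonitor's actual implementation):

```python
import datetime

def nth_weekday_of_month(d: datetime.date) -> int:
    # Days 1-7 hold the 1st occurrence of each weekday, days 8-14 the 2nd, and so on.
    return (d.day - 1) // 7 + 1

# 2024-07-13 fell on the 2nd Saturday of July 2024; 2024-07-06 on the 1st.
print(nth_weekday_of_month(datetime.date(2024, 7, 13)))  # 2
print(nth_weekday_of_month(datetime.date(2024, 7, 6)))   # 1
```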
    # Maximum backups to keep. When the scheduler is enabled and the Backup task runs, delete older backups after reaching the maximum backup count.
    "BackupsMax": 6,  # Not yet implemented!!!

    # When enabled, if the CREATE flag is triggered, the DupFileManager task is called if that plugin is installed.
    "onCreateCallDupFileManager": False,  # Not yet implemented!!!!

    # The following fields are ONLY used when running FileMonitor in script mode.
    "endpoint_Scheme": "http",  # Endpoint scheme to use when contacting the Stash server
    "endpoint_Host": "0.0.0.0",  # Endpoint host to use when contacting the Stash server
    "endpoint_Port": 9999,  # Endpoint port to use when contacting the Stash server
}
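Per the comments above, a zero frequency disables a task, while weekday/monthly tasks carry a `time` field instead. A small sketch of how a consumer of this config might filter enabled tasks (the helper name and exact rules are assumptions for illustration, not FileMonitor's code):

```python
def task_is_enabled(task: dict) -> bool:
    # Frequency-based tasks: enabled only with a positive frequency value.
    for freq_field in ("hours", "minutes"):
        if freq_field in task:
            return task[freq_field] > 0
    # Weekday/monthly tasks: enabled when a schedule time is given.
    return "time" in task

print(task_is_enabled({"task": "Clean", "hours": 48}))                                 # True
print(task_is_enabled({"task": "Create Tags", "pluginId": "pathParser", "hours": 0}))  # False
print(task_is_enabled({"task": "Generate", "weekday": "sunday", "time": "07:00"}))     # True
```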
@@ -1,3 +0,0 @@
stashapp-tools >= 0.2.49
pyYAML
watchdog