sgn.subprocess
Parallelize
Bases: SignalEOS
```mermaid
flowchart TD
  sgn.subprocess.Parallelize[Parallelize]
  sgn.sources.SignalEOS[SignalEOS]
  sgn.sources.SignalEOS --> sgn.subprocess.Parallelize
```
A context manager for running SGN pipelines with elements that implement separate processes or threads.
This class manages the lifecycle of workers (processes or threads) in an SGN pipeline, handling worker creation, execution, and cleanup. It also supports shared memory objects that will be automatically cleaned up on exit through the to_shm() method (only applicable for process mode).
Key features include:

- Automatic management of worker lifecycle (creation, starting, joining, cleanup)
- Shared memory management for efficient data sharing (process mode only)
- Signal handling coordination between the main process/thread and workers
- Resilience against KeyboardInterrupt (Ctrl+C): workers catch and ignore these signals, allowing the main process to coordinate a clean shutdown
- Orderly shutdown to ensure all resources are properly released
- Support for both multiprocessing and threading concurrency models
- Automatic detection and invocation when pipeline.run() is called
IMPORTANT: When using process mode, code using Parallelize MUST be wrapped within an `if __name__ == "__main__":` block. This is required because SGN uses Python's multiprocessing module with the 'spawn' start method, which requires that the main module be importable.
Example with automatic parallelization (RECOMMENDED):

```python
def main():
    pipeline = Pipeline()
    # Add ParallelizeTransformElement, ParallelizeSinkElement, etc.
    pipeline.run()  # Automatically detects and enables parallelization

if __name__ == "__main__":
    main()
```
Example with manual context manager (LEGACY):

```python
def main():
    pipeline = Pipeline()
    with Parallelize(pipeline) as parallelize:
        parallelize.run()

if __name__ == "__main__":
    main()
```
Source code in sgn/subprocess.py
__init__(pipeline=None, use_threading=None)
Initialize the Parallelize context manager.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `pipeline` | | The pipeline to run | `None` |
| `use_threading` | `bool \| None` | Whether to use threading instead of multiprocessing. If not specified, uses the `use_threading_default` | `None` |
Source code in sgn/subprocess.py
needs_parallelization(pipeline)
staticmethod
Check if a pipeline contains any elements that require parallelization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `pipeline` | | The Pipeline instance to check | required |

Returns:

| Name | Type | Description |
|---|---|---|
| | `bool` | True if the pipeline contains any Parallelize* elements |
Source code in sgn/subprocess.py
run()
Run the pipeline managed by this Parallelize instance.
This method executes the associated pipeline and ensures proper cleanup of worker resources, even in the case of exceptions. It signals all workers to stop when the pipeline execution completes or if an exception occurs.
Raises:

| Type | Description |
|---|---|
| `RuntimeError` | If an exception occurs during pipeline execution |
| `AssertionError` | If no pipeline was provided to the Parallelize instance |
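The cleanup guarantee described above is the classic try/finally shape: run the pipeline, and signal and join workers no matter how it exits. A generic sketch of that pattern, independent of sgn (all names here are illustrative, not the actual implementation):

```python
import threading

def run_with_workers(run_pipeline, workers, stop_event):
    """Run a pipeline callable, then always signal workers to stop and join them."""
    try:
        run_pipeline()
    except Exception as exc:
        # Surface pipeline failures uniformly, as run() does with RuntimeError
        raise RuntimeError("pipeline execution failed") from exc
    finally:
        stop_event.set()          # tell every worker to shut down
        for w in workers:
            w.join(timeout=5.0)   # wait for an orderly exit

# Usage: a trivial worker that simply waits for the stop signal
stop = threading.Event()
worker = threading.Thread(target=stop.wait)
worker.start()
run_with_workers(lambda: None, [worker], stop)
print(worker.is_alive())  # False
```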
Source code in sgn/subprocess.py
to_shm(name, bytez, **kwargs)
staticmethod
Create a shared memory object that can be accessed by subprocesses.
Note: This is only applicable in process mode. In thread mode, shared memory is not necessary since threads share the same address space.
This method creates a shared memory segment that will be automatically cleaned up when the Parallelize context manager exits. The shared memory can be used to efficiently share large data between processes without serialization overhead.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Unique identifier for the shared memory block | required |
| `bytez` | `bytes` or `bytearray` | Data to store in shared memory | required |
| `**kwargs` | | Additional metadata to store with the shared memory reference | `{}` |
Returns:

| Name | Type | Description |
|---|---|---|
| | `dict` | A dictionary containing the shared memory object and metadata with keys: `"name"` (the name of the shared memory block), `"shm"` (the SharedMemory object), plus any additional key-value pairs from kwargs |
Raises:

| Type | Description |
|---|---|
| `FileExistsError` | If shared memory with the given name already exists |
Example:

```python
shared_data = bytearray("Hello world", "utf-8")
shm_ref = Parallelize.to_shm("example_data", shared_data)
```
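On the worker side, a block like this can be attached by name with Python's standard `multiprocessing.shared_memory` module, which is presumably what backs `to_shm()`. A self-contained stdlib sketch of the whole round trip (block name and sizes are illustrative only):

```python
from multiprocessing import shared_memory

# Owner side: allocate a named block and copy the payload in
# (roughly what to_shm presumably does internally).
payload = bytearray("Hello world", "utf-8")
shm = shared_memory.SharedMemory(name="example_data", create=True, size=len(payload))
shm.buf[: len(payload)] = payload

# Worker side: attach to the existing block by name and read it back,
# with no serialization overhead.
view = shared_memory.SharedMemory(name="example_data")
data = bytes(view.buf[: len(payload)])
print(data.decode("utf-8"))  # Hello world

# Cleanup: every process closes its handle; the owner unlinks once.
# Parallelize does this unlinking automatically on context-manager exit.
view.close()
shm.close()
shm.unlink()
```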
Source code in sgn/subprocess.py
ParallelizeSinkElement
dataclass
Bases: SinkElement, _ParallelizeBase, Parallelize
```mermaid
flowchart TD
  sgn.subprocess.ParallelizeSinkElement[ParallelizeSinkElement]
  sgn.base.SinkElement[SinkElement]
  sgn.base.ElementLike[ElementLike]
  sgn.base.UniqueID[UniqueID]
  sgn.subprocess._ParallelizeBase[_ParallelizeBase]
  sgn.subprocess.Parallelize[Parallelize]
  sgn.sources.SignalEOS[SignalEOS]
  sgn.base.SinkElement --> sgn.subprocess.ParallelizeSinkElement
  sgn.base.ElementLike --> sgn.base.SinkElement
  sgn.base.UniqueID --> sgn.base.ElementLike
  sgn.subprocess._ParallelizeBase --> sgn.subprocess.ParallelizeSinkElement
  sgn.subprocess.Parallelize --> sgn.subprocess._ParallelizeBase
  sgn.subprocess.Parallelize --> sgn.subprocess.ParallelizeSinkElement
  sgn.sources.SignalEOS --> sgn.subprocess.Parallelize
```
A Sink element that runs data consumption logic in a separate process or thread.
This class extends the standard SinkElement to execute its processing in a separate worker (process or thread). It communicates with the main process/thread through input and output queues, and manages the worker lifecycle. Subclasses must implement the worker_process method to define the consumption logic that runs in the worker.
The design intentionally avoids passing class or instance references to the worker to prevent pickling issues when using process mode. Instead, it passes all necessary data and resources via function arguments.
The implementation includes special handling for KeyboardInterrupt signals. When Ctrl+C is pressed in the terminal, workers will catch and ignore the KeyboardInterrupt, allowing them to continue processing while the main process coordinates a graceful shutdown. This prevents data loss and ensures all resources are properly cleaned up.
Attributes:

| Name | Type | Description |
|---|---|---|
| `queue_maxsize` | `int` | Maximum size of the communication queues |
| `err_maxsize` | `int` | Maximum size for error data |
| `_use_threading_override` | `bool` | Set to True to use threading or False to use multiprocessing. If not specified, uses the `Parallelize.use_threading_default` |
Example with default process mode:

```python
@dataclass
class MyLoggingSinkElement(ParallelizeSinkElement):
    def pull(self, pad, frame):
        if frame.EOS:
            self.mark_eos(pad)
        # Send the frame to the worker
        self.in_queue.put((pad.name, frame))

    def worker_process(self, context: WorkerContext):
        try:
            # Get data from the main process/thread
            pad_name, frame = context.input_queue.get(timeout=0.1)
            # Process or log the data
            if not frame.EOS:
                print(f"Sink received on {pad_name}: {frame.data}")
            else:
                print(f"Sink received EOS on {pad_name}")
        except queue.Empty:
            pass
```
Example with thread mode:

```python
@dataclass
class MyThreadedSinkElement(ParallelizeSinkElement):
    _use_threading_override = True
    # Implementation same as above
```
Source code in sgn/subprocess.py
ParallelizeSourceElement
dataclass
Bases: SourceElement, _ParallelizeBase, Parallelize
```mermaid
flowchart TD
  sgn.subprocess.ParallelizeSourceElement[ParallelizeSourceElement]
  sgn.base.SourceElement[SourceElement]
  sgn.base.ElementLike[ElementLike]
  sgn.base.UniqueID[UniqueID]
  sgn.subprocess._ParallelizeBase[_ParallelizeBase]
  sgn.subprocess.Parallelize[Parallelize]
  sgn.sources.SignalEOS[SignalEOS]
  sgn.base.SourceElement --> sgn.subprocess.ParallelizeSourceElement
  sgn.base.ElementLike --> sgn.base.SourceElement
  sgn.base.UniqueID --> sgn.base.ElementLike
  sgn.subprocess._ParallelizeBase --> sgn.subprocess.ParallelizeSourceElement
  sgn.subprocess.Parallelize --> sgn.subprocess._ParallelizeBase
  sgn.subprocess.Parallelize --> sgn.subprocess.ParallelizeSourceElement
  sgn.sources.SignalEOS --> sgn.subprocess.Parallelize
```
A Source element that generates data in a separate process or thread.
This class extends the standard SourceElement to execute its data generation logic in a separate worker (process or thread). It communicates with the main process through output queues, and manages the worker lifecycle. Subclasses must implement the worker_process method to define the data generation logic that runs in the worker.
The design intentionally avoids passing class or instance references to the worker to prevent pickling issues when using process mode. Instead, it passes all necessary data and resources via function arguments.
The implementation includes special handling for KeyboardInterrupt signals. When Ctrl+C is pressed in the terminal, workers will catch and ignore the KeyboardInterrupt, allowing them to continue processing while the main process coordinates a graceful shutdown. This prevents data loss and ensures all resources are properly cleaned up.
Attributes:

| Name | Type | Description |
|---|---|---|
| `queue_maxsize` | `int` | Maximum size of the communication queues |
| `err_maxsize` | `int` | Maximum size for error data |
| `frame_factory` | `Callable` | Function to create Frame objects |
| `at_eos` | `bool` | Flag indicating if End-Of-Stream has been reached |
| `_use_threading_override` | `bool` | Set to True to use threading or False to use multiprocessing. If not specified, uses the `Parallelize.use_threading_default` |
Example with default process mode:

```python
@dataclass
class MyDataSourceElement(ParallelizeSourceElement):
    def __post_init__(self):
        super().__post_init__()
        # Dictionary to track EOS status for each pad
        self.pad_eos = {pad.name: False for pad in self.source_pads}

    def new(self, pad):
        # Check if this pad has already reached EOS
        if self.pad_eos[pad.name]:
            return Frame(data=None, EOS=True)
        try:
            # Get data generated by the worker
            # In a real implementation, you might use pad-specific queues
            # or have the worker send pad-specific data
            data = self.out_queue.get(timeout=1)
            # Check for EOS signal (None typically indicates EOS)
            if data is None:
                self.pad_eos[pad.name] = True
                # If all pads have reached EOS, set global EOS flag
                if all(self.pad_eos.values()):
                    self.at_eos = True
                return Frame(data=None, EOS=True)
            # For data intended for other pads, you might implement
            # custom routing logic here
            return Frame(data=data)
        except queue.Empty:
            # Return an empty frame if no data is available
            return Frame(data=None)

    def worker_process(self, context: WorkerContext):
        # Generate data and send it back to the main process/thread
        for i in range(10):
            if context.should_stop():
                break
            context.output_queue.put(f"Generated data {i}")
            time.sleep(0.5)
        # Signal end of stream with None
        context.output_queue.put(None)
        # Wait for worker_stop before terminating
        # This prevents "worker stopped before EOS" errors
        while not context.should_stop():
            time.sleep(0.1)
```
Example with thread mode:

```python
@dataclass
class MyThreadedSourceElement(ParallelizeSourceElement):
    _use_threading_override = True

    def __post_init__(self):
        super().__post_init__()
        # Dictionary to track EOS status for each pad
        self.pad_eos = {pad.name: False for pad in self.source_pads}

    def new(self, pad):
        # Similar implementation as in the process mode example,
        # but might use threading-specific features if needed
        if self.pad_eos[pad.name]:
            return Frame(data=None, EOS=True)
        # Rest of implementation same as the process mode example
```
Source code in sgn/subprocess.py
ParallelizeTransformElement
dataclass
Bases: TransformElement, _ParallelizeBase, Parallelize
```mermaid
flowchart TD
  sgn.subprocess.ParallelizeTransformElement[ParallelizeTransformElement]
  sgn.base.TransformElement[TransformElement]
  sgn.base.ElementLike[ElementLike]
  sgn.base.UniqueID[UniqueID]
  sgn.subprocess._ParallelizeBase[_ParallelizeBase]
  sgn.subprocess.Parallelize[Parallelize]
  sgn.sources.SignalEOS[SignalEOS]
  sgn.base.TransformElement --> sgn.subprocess.ParallelizeTransformElement
  sgn.base.ElementLike --> sgn.base.TransformElement
  sgn.base.UniqueID --> sgn.base.ElementLike
  sgn.subprocess._ParallelizeBase --> sgn.subprocess.ParallelizeTransformElement
  sgn.subprocess.Parallelize --> sgn.subprocess._ParallelizeBase
  sgn.subprocess.Parallelize --> sgn.subprocess.ParallelizeTransformElement
  sgn.sources.SignalEOS --> sgn.subprocess.Parallelize
```
A Transform element that runs processing logic in a separate process or thread.
This class extends the standard TransformElement to execute its processing in a separate worker (process or thread). It communicates with the main process/thread through input and output queues, and manages the worker lifecycle. Subclasses must implement the worker_process method to define the processing logic that runs in the worker.
The design intentionally avoids passing class or instance references to the worker to prevent pickling issues when using process mode. Instead, it passes all necessary data and resources via function arguments.
The implementation includes special handling for KeyboardInterrupt signals. When Ctrl+C is pressed in the terminal, workers will catch and ignore the KeyboardInterrupt, allowing them to continue processing while the main process coordinates a graceful shutdown. This prevents data loss and ensures all resources are properly cleaned up.
Attributes:

| Name | Type | Description |
|---|---|---|
| `queue_maxsize` | `int` | Maximum size of the communication queues |
| `err_maxsize` | `int` | Maximum size for error data |
| `at_eos` | `bool` | Flag indicating if End-Of-Stream has been reached |
| `_use_threading_override` | `bool` | Set to True to use threading or False to use multiprocessing. If not specified, uses the `Parallelize.use_threading_default` |
Example with default process mode:

```python
@dataclass
class MyProcessingElement(ParallelizeTransformElement):
    multiplier: int = 2  # Instance attributes become worker parameters

    def pull(self, pad, frame):
        # Send the frame to the worker
        self.in_queue.put(frame)
        if frame.EOS:
            self.at_eos = True

    def worker_process(self, context: WorkerContext, multiplier: int):
        # Process data in the worker using the clean context
        try:
            frame = context.input_queue.get(timeout=0.1)
            if frame and not frame.EOS:
                frame.data *= multiplier
            context.output_queue.put(frame)
        except queue.Empty:
            pass

    def new(self, pad):
        # Get processed data from the worker
        return self.out_queue.get()
```
Example with thread mode:

```python
@dataclass
class MyThreadedElement(ParallelizeTransformElement):
    _use_threading_override = True
    # Implementation same as above
```
Example with multiple worker parameters:

```python
@dataclass
class MyProcessingElement(ParallelizeTransformElement):
    multiplier: int = 2
    threshold: float = 0.5

    def pull(self, pad, frame):
        self.in_queue.put(frame)
        if frame.EOS:
            self.at_eos = True

    def worker_process(
        self, context: WorkerContext, multiplier: int, threshold: float
    ):
        try:
            frame = context.input_queue.get(timeout=0.1)
            if frame and not frame.EOS and frame.data > threshold:
                frame.data *= multiplier
            context.output_queue.put(frame)
        except queue.Empty:
            pass

    def new(self, pad):
        return self.out_queue.get()
```
Source code in sgn/subprocess.py
QueueProtocol
Bases: Protocol
Protocol defining a common Queue interface.
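Such an interface can be expressed with `typing.Protocol`, which matches structurally rather than by inheritance. The method set below is an assumption based on the overlap of `queue.Queue` and `multiprocessing.Queue`, not the exact definition in sgn:

```python
from typing import Any, Optional, Protocol, runtime_checkable
import queue

@runtime_checkable
class QueueLike(Protocol):
    """Structural interface shared by queue.Queue and multiprocessing.Queue."""

    def put(self, item: Any, block: bool = True, timeout: Optional[float] = None) -> None: ...
    def get(self, block: bool = True, timeout: Optional[float] = None) -> Any: ...
    def empty(self) -> bool: ...

# queue.Queue satisfies the protocol structurally; no subclassing is needed.
q: QueueLike = queue.Queue()
q.put("hello")
print(isinstance(q, QueueLike), q.get())  # True hello
```

Because the check is structural, the same annotation accepts either queue implementation without the element code caring which concurrency model is active.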
Source code in sgn/subprocess.py
QueueWrapper
A wrapper that provides a unified interface for both Queue implementations.
This abstraction handles the differences between multiprocessing.Queue and queue.Queue APIs, specifically providing no-op implementations for multiprocessing-specific methods when wrapping a queue.Queue.
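A minimal sketch of this idea follows (illustrative only; sgn's actual wrapper may expose different methods). `queue.Queue` lacks multiprocessing-specific methods such as `close()` and `join_thread()`, so the wrapper degrades them to no-ops:

```python
import queue

class UnifiedQueue:
    """Illustrative wrapper giving queue.Queue and multiprocessing.Queue one API."""

    def __init__(self, inner):
        self._inner = inner

    def put(self, item, block=True, timeout=None):
        self._inner.put(item, block, timeout)

    def get(self, block=True, timeout=None):
        return self._inner.get(block, timeout)

    def close(self):
        # multiprocessing.Queue has close(); queue.Queue does not, so no-op there.
        close = getattr(self._inner, "close", None)
        if close is not None:
            close()

    def join_thread(self):
        # Same pattern for joining the multiprocessing feeder thread.
        jt = getattr(self._inner, "join_thread", None)
        if jt is not None:
            jt()

uq = UnifiedQueue(queue.Queue())
uq.put("data")
print(uq.get())  # data
uq.close()  # safe no-op when wrapping queue.Queue
```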
Source code in sgn/subprocess.py
WorkerContext
Context object passed to worker methods with clean access to resources.
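Based purely on how the examples above use it (`context.input_queue`, `context.output_queue`, `context.should_stop()`), the shape of this context can be sketched as follows; this is a guess at the layout, not the actual class:

```python
from dataclasses import dataclass
from typing import Any
import queue
import threading

@dataclass
class WorkerContextSketch:
    """Illustrative stand-in mirroring how the examples use WorkerContext."""

    input_queue: Any    # frames sent from the main process/thread
    output_queue: Any   # results sent back to the main process/thread
    stop_event: threading.Event

    def should_stop(self) -> bool:
        # Workers poll this to learn when the main process wants them to exit.
        return self.stop_event.is_set()

ctx = WorkerContextSketch(queue.Queue(), queue.Queue(), threading.Event())
ctx.input_queue.put("frame")
print(ctx.should_stop())  # False until stop_event.set() is called
```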