- Requirements
- Getting Started
- Overview
- Features
- Usage
- Output Example
- Future Updates
- Feedback
This project requires Python 3.7 or later. Compatibility issues have been identified with the use of `dataclasses` in Python 3.6 and earlier.
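A quick interpreter check can surface this requirement early (a minimal sketch based on the version floor above; the message text is illustrative):

```python
import sys

# Fail fast on interpreters older than Python 3.7, where the
# dataclasses module is not part of the standard library.
if sys.version_info < (3, 7):
    raise RuntimeError(
        f"DataLoader requires Python 3.7+; found {sys.version.split()[0]}"
    )
```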
- Clone the repository.
- Install the required dependencies:

```sh
pip install -r requirements.txt
```

- Or install the package via pip:

```sh
pip install dynamic-loader
```
The `DataLoader` project is a comprehensive utility that facilitates the efficient loading and processing of data from specified directories. This project is designed to be user-friendly and easy to integrate into your projects.
The `DataMetrics` class focuses on processing data paths and gathering statistics related to the file system and the specified paths. It also allows all statistics to be exported to a JSON file.
The `Extensions` class is a utility that provides a set of default file extensions for the `DataLoader` class. It is the backbone for mapping each file extension to its respective loading method.
The `DataLoader` class is specifically designed for loading and processing data from directories. It provides the following key features:
- Dynamic Loading: Load files from a single directory or merge files from multiple directories.
- Flexible Configuration: Set various parameters, such as default file extensions, full POSIX paths, method loader execution, and more.
- Parallel Execution: Leverage parallel execution with the `total_workers` parameter to enhance performance.
- Verbose Output: Display verbose output to track the loading process.
  - If enabled, the `verbose` parameter will display the loading process for each file.
  - If disabled, the loading process for each file will be written to a log file instead.
- Custom Loaders: Implement custom loaders for specific file extensions.
- Please note that, at the moment, the loading method's kwargs are applied uniformly to all files with the specified extension.
- Additionally, the first parameter of the loader method is passed automatically and should be skipped. If it is passed explicitly, the loader will fail and return the contents of the file as `TextIOWrapper`.

Future updates will include the ability to efficiently specify which loader method to use for specific files.
- `path` (str or Path): The path of the directory from which to load files.
- `directories` (Iterable): An iterable of directories from which to load all files.
- `default_extensions` (Iterable): Default file extensions to be processed.
- `full_posix` (bool): Indicates whether to display full POSIX paths.
- `no_method` (bool): Indicates whether to skip loading-method matching and execution.
- `verbose` (bool): Indicates whether to display verbose output.
- `generator` (bool): Indicates whether to return the loaded files as a generator; otherwise, they are returned as a dictionary.
- `total_workers` (int): Number of workers for parallel execution.
- `log` (Logger): A configured logger instance for logging messages. (Refer to the `GetLogger` class for more information on how to create a logger instance.)
- `ext_loaders` (dict[str, dict[Callable, dict]]): Dictionary mapping extensions to the specified loaders and their kwargs. (Refer to the `Extensions` class for more information.)
- `load_file` (class method): Load a specific file.
- `get_files` (class method): Retrieve files from a directory based on the default extensions, filtering out unwanted files.
- `dir_files` (property): Loaded files from the specified directories.
- `files` (property): Loaded files from a single directory.
- `all_exts` (property): Retrieve all supported file extensions with their respective loader methods.
- `EXTENSIONS` (`Extensions` class instance): Retrieve all default supported file extensions with their respective loader methods.
The `DataMetrics` class focuses on processing data paths and gathering statistics related to the file system. Key features include:
- OS Statistics: Retrieve detailed statistics for each path, including symbolic link status, calculated size, and size in bytes.
- Export to JSON: Export all statistics to a JSON file for further analysis and visualization.
- `paths` (Iterable): Paths for which to gather statistics.
- `file_name` (str): The file name to use when exporting all file metadata stats.
- `full_posix` (bool): Indicates whether to display full POSIX paths.
- `all_stats`: Retrieve statistics for all paths.
- `total_size`: Calculate the total size of all paths.
- `total_files`: Calculate the total number of files in all paths.
- `export_stats()`: Export all statistics to a JSON file.
- `os_stats_results`: OS statistics results for each path.
- Custom Stats:
  - `st_fsize`: Full file size statistics.
  - `st_vsize`: Full volume size statistics.
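The statistics above map onto standard-library primitives; the sketch below is a generic illustration (not `DataMetrics`' actual code) of gathering file-size and volume-size figures for one path:

```python
import os
import shutil

def path_stats(path):
    # File-size statistics (an st_fsize analogue) come from os.stat;
    # volume-size statistics (an st_vsize analogue) from shutil.disk_usage.
    st = os.stat(path)
    usage = shutil.disk_usage(path)
    return {
        "is_symlink": os.path.islink(path),
        "bytes_size": st.st_size,
        "volume": {"total": usage.total, "used": usage.used, "free": usage.free},
    }

print(path_stats("."))
```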
The `Extensions` class is a utility that provides a set of default file extensions for the `DataLoader` class. It is the backbone for mapping each file extension to its respective loading method. All extensions are stored in a dictionary (no leading period included), and the `Extensions` class provides the following key features:
- File Extension Mapping: Retrieve all supported file extensions with their respective loader methods.
- Loader Method Retrieval: Retrieve the loader method for a specific file extension.
- Loader Method Check: Check whether a specific file extension has a loader method implemented that is not `open`.
- Supported Extension Check: Check whether a specific file extension is supported.
- Customization: Customize the `Extensions` class with new file extensions and their respective loader methods.
- No parameters are required for the `Extensions` class.
- `Extensions()`: Initializes the `Extensions` class with all implemented file extensions and their respective loader methods.
  - Acts as a dictionary for accessing supported file extensions and their loader methods via `Extensions().ALL_EXTS`.
- `ALL_EXTS`: Retrieve all supported file extensions with their respective loader methods.
- `get_loader`: Retrieve the loader method for a specific file extension.
- `has_loader`: Check whether a specific file extension has a loader method implemented that is not `open`.
- `is_supported`: Check whether a specific file extension is supported.
- `customize`: Customize the `Extensions` class with new file extensions and their respective loader methods.
  - The specified loading method will be converted to a lambda function to support kwargs.
  - The first parameter of the loader method is passed automatically and should be skipped. If it is passed explicitly, the loader will fail and return the contents of the file as `TextIOWrapper`.
  - Future updates will include the ability to efficiently specify which loader method to use for specific files.
  - The loader method's kwargs are applied uniformly to all files with the specified extension.
- Example:

```python
import pandas as pd

# Structure: {extension: {loader_method: {kwargs}}}
ext_loaders = {"csv": {pd.read_csv: {"header": 10}}}
```
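The lambda conversion described above can be illustrated in plain Python (a simplified sketch of the assumed behavior, not the library's actual implementation; `read_text` is a hypothetical stand-in loader):

```python
import os
import tempfile

def bind_loader(loader, kwargs):
    # Wrap a loader and its kwargs into a one-argument callable, mirroring
    # the lambda conversion customize() performs: the file path (the
    # loader's first parameter) is supplied automatically at load time.
    return lambda path: loader(path, **kwargs)

# Hypothetical stand-in loader; any callable whose first parameter
# is the file path fits this shape.
def read_text(path, upper=False):
    with open(path) as fh:
        text = fh.read()
    return text.upper() if upper else text

loader = bind_loader(read_text, {"upper": True})

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as fh:
    fh.write("hello")
print(loader(fh.name))  # HELLO
os.unlink(fh.name)
```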
The `GetLogger` class is a utility that provides a method to get a configured logger instance for logging messages. It is designed to be user-friendly and easy to integrate into your projects.
- `name` (str, optional): The name of the logger. Defaults to the name of the calling module.
- `level` (int, optional): The logging level. Defaults to `logging.DEBUG`.
- `formatter_kwgs` (dict, optional): Additional keyword arguments for the log formatter.
- `handler_kwgs` (dict, optional): Additional keyword arguments for the log handler.
- `mode` (str, optional): The file mode for opening the log file. Defaults to `"a"` (append).
- `refresher` (callable): A method to refresh the log file.
- `set_verbose` (callable): A method to set the verbosity of the logger.
- Logger: A configured logger instance.
- This function sets up a logger with a file handler and an optional stream (console) handler for verbose logging.
- If `verbose` is True, log messages will be printed to the console instead of being written to a file.
```python
from data_loader import DataLoader

# Load all files with a specified path (directory) as a Generator
dl_gen = DataLoader(path="path/to/directory")
dl_files_gen = dl_gen.files
print(dl_files_gen)
# Output:
# <generator object DataLoader.files.<key-value> at 0x1163f4ba0>
```
```python
from data_loader import DataLoader

# Load all files with a specified path (directory) as a Dictionary (Custom-Repr)
# Disabling 'generator' and 'full_posix' for display purposes.
dl_dict = DataLoader(path="path/to/directory", generator=False, full_posix=False)
dl_files_dict = dl_dict.files
print(dl_files_dict)
# Output:
# DataLoader((LICENSE.md, <TextIOWrapper>),
#            (requirements.txt, <Str>),
#            (Makefile, <Str>),
#            ...
#            (space_4.txt, <Str>))
```
```python
from data_loader import DataLoader

# Load all files from multiple directories
# Disabling 'generator' and 'full_posix' for display purposes.
dl = DataLoader(directories=["path/to/dir1", "path/to/dir2"], generator=False, full_posix=False)
dl_dir_files = dl.dir_files
print(dl_dir_files)
# Output:
# DataLoader((file1.txt, <Str>),
#            (file2.txt, <Str>),
#            (file3.txt, <Str>),
#            ...
#            (fileN.txt, <Str>))
```
```python
from data_loader import DataLoader

# Load all files with the specified default extensions
dl_default = DataLoader(path="path/to/directory", default_extensions=["csv"], generator=False, full_posix=False)
dl_default_files = dl_default.files
print(dl_default_files)
# Output:
# DataLoader((file1.csv, <DataFrame>),
#            (file2.csv, <DataFrame>),
#            ...
#            (fileN.csv, <DataFrame>))
```
```python
from data_loader import DataLoader

# Retrieve data for a specific file
dl_files = DataLoader(path="path/to/directory", generator=False, full_posix=False).files
dl_specific_file_data = dl_files["file1.csv"]
print(dl_specific_file_data)
# Output:
# <DataFrame>
```
```python
import pandas as pd
from data_loader import DataLoader

# Specify your own custom loader methods
dl_custom = DataLoader(path="path/to/directory", ext_loaders={"csv": {pd.read_csv: {"nrows": 10}}}, generator=False, full_posix=False)
dl_custom_files = dl_custom.files
print(dl_custom_files)
# Output:
# DataLoader((file1.csv, <DataFrame>),
#            (file2.csv, <DataFrame>),
#            ...
#            (fileN.csv, <DataFrame>))
# Note: 'nrows' will be dynamically passed to the 'pd.read_csv' method for each file.
```
```python
import logging
from data_loader import DataLoader

# Specify your own custom logger
custom_logger = logging.getLogger("DataLoader")
dl_with_logger = DataLoader(path="path/to/directory", log=custom_logger)
dl_logger_files = dl_with_logger.files
print(dl_logger_files)
# Output:
# <generator object DataLoader.files.<key-value> at 0x1163f4ba0>
# Note: The logger will be used to log or stream messages.
```
```python
from data_loader import DataMetrics

# Retrieve statistics for all paths
dm = DataMetrics(paths=["path/to/directory1", "path/to/directory2"])
print(dm.all_stats)    # Retrieve statistics for all paths
print(dm.total_size)   # Calculate the total size of all paths
print(dm.total_files)  # Calculate the total number of files in all paths
dm.export_stats()      # Export all statistics to a JSON file
```
```python
import PIL.Image
from data_loader import Extensions

ALL_EXTS = Extensions()  # Initializes the Extensions class; acts as a dictionary of supported extensions
print("csv" in ALL_EXTS)              # True
print(ALL_EXTS.get_loader("csv"))     # <function read_csv at 0x7f8e3e3e3d30>
# The leading period is also accepted:
print(ALL_EXTS.get_loader(".pickle"))
print(ALL_EXTS.has_loader("docx"))    # False
print(ALL_EXTS.is_supported("docx"))  # True

# Customize the Extensions class with new file extensions and loader methods
ALL_EXTS.customize({"docx": {open: {"mode": "rb"}},
                    "png": {PIL.Image.open: {}}})
print(ALL_EXTS.get_loader("docx"))    # <function <lambda> at 0x7f8e3e3e3d30>
```
```python
import logging
from data_loader import GetLogger

# Create a logger with default settings
logger = GetLogger().logger
logger.info("This is an info message")  # Writes to the log file

# Create a logger with custom settings
logger = GetLogger(name="custom_logger", level=logging.INFO, verbose=True).logger
logger.info("This is an info message")  # Prints to the console

# Initiate verbosity
logger = GetLogger().logger
logger.set_verbose(True)
CustomException("Error Message")  # Prints to the console

# Disable verbosity
logger.set_verbose(False)
CustomException("Error Message")  # Writes to the log file
```
```python
from data_loader import DataMetrics

# Create a DataMetrics instance with paths and corresponding metadata
dm = DataMetrics(("path/to/directory1", <Dict>),
                 ("path/to/directory2", <Dict>))

# Access metadata for a specific path
metadata_directory1 = dm["path/to/directory1"]
print(metadata_directory1)
# Output:
# {'os_stats_results': <os_stats_results>,
#  'st_fsize': Stats(symbolic='6.20 KB', calculated_size=6.19921875, bytes_size=6348),
#  'st_vsize': {'total': Stats(symbolic='465.63 GB (Gigabytes)', calculated_size=465.62699127197266, bytes_size=499963174912),
#               'used': Stats(symbolic='131.60 GB (Gigabytes)', calculated_size=131.59552001953125, bytes_size=141299613696),
#               'free': Stats(symbolic='334.03 GB (Gigabytes)', calculated_size=334.0314712524414, bytes_size=358663561216)}}

# Export all statistics to a JSON file
dm.export_stats(file_path="all_metadata_stats.json")

# Calculate the total size of all paths
total_size = dm.total_size
print(total_size)
# Output:
# Stats(symbolic='471.76 GB (Gigabytes)', calculated_size=471.75720977783203, bytes_size=507012679260)

# Calculate the total number of files in all paths
total_files = dm.total_files
print(total_files)
# Output:
# 215
```
- Include the ability to specify loader methods for individual files, providing greater flexibility.
- Add an option for a special representation of loaded files, displaying the full contents rather than just the data type.
- Add more comprehensive tests covering all implemented features.
- Include specific tests for the `ext_loaders` parameter.
- Add loading-method keyword-argument support for the `load_file` class method.
  - Implement a more efficient way to specify loader-method kwargs for specific files rather than applying them uniformly.
Feedback is crucial for the improvement of the `DataLoader` project. If you encounter any issues, have suggestions, or want to share your experience, please consider the following channels:

- GitHub Issues: Open an issue on the GitHub repository to report bugs or suggest enhancements.
- Contact: Reach out to the project maintainer via the following:

Your feedback and contributions play a significant role in making the `DataLoader` project more robust and valuable for the community. Thank you for being part of this endeavor!