Welcome to deforce’s documentation!
deforce (Metaheuristic-optimized Multi-Layer Perceptron) is a Python library that implements the traditional Multi-Layer Perceptron and its variants. These include Metaheuristic-optimized MLP models (GA, PSO, WOA, TLO, DE, …) and Gradient Descent-optimized MLP models (SGD, Adam, Adadelta, Adagrad, …). It provides a comprehensive set of optimizers for training MLP models and is compatible with the Scikit-Learn library, so you can perform searches and hyperparameter tuning with the tools Scikit-Learn provides.
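Because deforce estimators follow the scikit-learn estimator API, they can be dropped into tools such as `GridSearchCV`. A minimal sketch of that tuning pattern is shown below; to stay self-contained it uses scikit-learn's own `MLPRegressor` as a stand-in, since the constructor parameters of deforce estimators (e.g. `CfnRegressor`) are library-specific and not documented here.

```python
# Hyperparameter tuning via GridSearchCV. MLPRegressor stands in for a
# deforce estimator (e.g. CfnRegressor); any scikit-learn-compatible
# estimator can occupy the same slot.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Synthetic regression data for illustration only.
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=42)

# Grid over estimator hyperparameters; names here are MLPRegressor's,
# a deforce estimator would expose its own parameter names.
param_grid = {
    "hidden_layer_sizes": [(10,), (20,)],
    "learning_rate_init": [0.001, 0.01],
}
search = GridSearchCV(
    MLPRegressor(max_iter=500, random_state=42),
    param_grid,
    cv=3,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)
```

The same `fit` / `best_params_` workflow applies unchanged once a deforce estimator replaces the stand-in.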
Free software: GNU General Public License (GPL) v3
Provided Estimators: CfnRegressor, CfnClassifier, DfoCfnRegressor, DfoCfnClassifier
Total Metaheuristic-based MLP Regressors: > 200 models
Total Metaheuristic-based MLP Classifiers: > 200 models
Total Gradient Descent-based MLP Regressors: 12 models
Total Gradient Descent-based MLP Classifiers: 12 models
Supported performance metrics: >= 67 (47 regression and 20 classification metrics)
Supported objective functions (usable as fitness or loss functions): >= 67 (47 regression and 20 classification objectives)
Documentation: https://deforce.readthedocs.io
Python versions: >= 3.8
Dependencies: numpy, scipy, scikit-learn, pandas, mealpy, permetrics, torch, skorch
- Installation
- Examples
- 1) deforce provides several useful classes
- 2) What you can do with the DataTransformer class
- 3) What you can do with the Data class
- 4) What you can do with all model classes
- 5) What you can do with a model object
- 6) Combining deforce with scikit-learn like a normal library
- 7) Utilities that deforce provides
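The example sections above walk through a transform–split–fit–predict workflow. A minimal sketch of that workflow follows, using scikit-learn components as stand-ins: `StandardScaler` plays the role that deforce's DataTransformer class fills, and `MLPRegressor` stands in for a deforce estimator, whose exact constructor and method parameters are library-specific and assumed here.

```python
# Sketch of the transform -> split -> fit -> predict -> score workflow
# that deforce's scikit-learn-style estimators also follow.
from sklearn.datasets import load_diabetes
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1
)

# Scaling step: the part deforce's DataTransformer class would handle.
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# Stand-in model: a deforce regressor would be constructed and fit the same way.
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=1000, random_state=1)
model.fit(X_train_s, y_train)
y_pred = model.predict(X_test_s)
print("R2:", r2_score(y_test, y_pred))
```

Swapping in a deforce estimator (and, per the feature list, one of its 67+ performance metrics) keeps this structure intact.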