# Logistic Map Approximator (Neural Network)
This model approximates the logistic map equation:
xₙ₊₁ = r · xₙ · (1 − xₙ)
The model is a simple feedforward neural network trained to learn these dynamics, including chaotic behaviour, across values of r ∈ [2.5, 4.0].
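For reference, a minimal sketch of the target function the network learns (plain NumPy; the function name is illustrative, not part of the released code):

```python
import numpy as np

def logistic_map(x, r):
    """One step of the logistic map: x_{n+1} = r * x_n * (1 - x_n)."""
    return r * x * (1.0 - x)

# Iterate a short trajectory in the chaotic regime (r close to 4.0).
x = 0.5
for _ in range(10):
    x = logistic_map(x, r=3.9)
    print(round(x, 4))
```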
## Model Details
- Framework: PyTorch
- Input: xₙ ∈ [0, 1], r ∈ [2.5, 4.0]
- Output: x_next (approximation of the next value in the sequence)
- Loss Function: Mean Squared Error (MSE)
- Architecture: 2 hidden layers (ReLU), trained for 100 epochs (see the sketch below)
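The card does not state the hidden-layer width, so the following is only a sketch of a compatible architecture, assuming the two inputs (xₙ, r) and one output named above; the class name and layer sizes are illustrative, not the exact trained model:

```python
import torch
import torch.nn as nn

class LogisticMapNet(nn.Module):
    """Feedforward net with 2 hidden ReLU layers mapping (x_n, r) -> x_{n+1}.

    The hidden width (64) is an assumption; the model card does not specify it.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden),    # input: x_n and r
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),    # output: predicted x_{n+1}
        )

    def forward(self, x):
        return self.net(x)

# Training would minimise MSE between predictions and r * x * (1 - x),
# as stated in the model details above.
```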
## Performance
The model closely approximates x_next for a wide range of r values, including the chaotic regime.
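A hedged way to check this yourself, assuming the released checkpoint stores a state dict compatible with the `LogisticMapNet` sketch above (if `logistic_map_approximator.pth` instead stores a full model object, adjust the loading step accordingly):

```python
import torch

# Assumes LogisticMapNet from the sketch above matches the saved weights.
model = LogisticMapNet()
model.load_state_dict(torch.load("logistic_map_approximator.pth", map_location="cpu"))
model.eval()

x_n, r = 0.5, 3.9
with torch.no_grad():
    pred = model(torch.tensor([[x_n, r]])).item()

true = r * x_n * (1 - x_n)
print(f"predicted: {pred:.4f}  true: {true:.4f}")
```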
## Files
- `logistic_map_approximator.pth`: Trained PyTorch model weights
- `mandelbrot.py`: Full training and evaluation code
- `README.md`: You're reading it
- `example_plot.png`: Comparison of true vs predicted outputs
## Applications
- Chaos theory visualizations
- Educational tools on non-linear dynamics
- Function approximation benchmarking
## License
MIT License