Rename README.md to model_overview.md
README.md → model_overview.md
RENAMED
@@ -170,47 +170,6 @@ This model is not tested or intended for use in mission critical applications th

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

-For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.
+For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](model_explainability.md), [Bias](model_bias.md), [Safety & Security](model_safety.md), and [Privacy](model_privacy.md) Subcards.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
-
-# Bias
-
-Field | Response
-:---------------------------------------------------------------------------------------------------|:---------------
-Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | Not Applicable
-Bias Metric (If Measured): | Not Applicable
-Measures taken to mitigate against unwanted bias: | Not Applicable
-
-# Explainability
-
-Field | Response
-:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------
-Intended Task/Domain: | Robotic Manipulation
-Model Type: | Denoising Diffusion Probabilistic Model
-Intended Users: | Roboticists and researchers in academia and industry who are interested in robot manipulation research
-Output: | Actions consisting of end-effector poses, gripper states and head orientation.
-Describe how the model works: | ``mindmap`` is a Denoising Diffusion Probabilistic Model that samples robot trajectories conditioned on sensor observations and a 3D reconstruction of the environment.
-Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
-Technical Limitations & Mitigation: | - Limitation: This policy is only effective in the exact simulation environment in which it was trained. Mitigation: Recommended to retrain the model in new simulation environments. - Limitation: The policy was not tested on a physical robot and likely only works in simulation. Mitigation: Expand training, testing and validation on physical robot platforms.
-Verified to have met prescribed NVIDIA quality standards: | Yes
-Performance Metrics: | Closed loop success rate on simulated robotic manipulation tasks.
-Potential Known Risks: | The model might be susceptible to rendering changes on the simulation tasks it was trained on.
-Licensing: | [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/)
-
-# Safety and Security
-
-Field | Response
-:---------------------------------------------------|:----------------------------------
-Model Application Field(s): | Robotics
-Use Case Restrictions: | Abide by [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/)
-Model and dataset restrictions: | The Principle of least privilege (PoLP) is applied limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints adhered to.
-
-# Privacy
-
-Field | Response
-:----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------
-Generatable or reverse engineerable personal data? | No
-Personal data used to create this model? | No
-How often is dataset reviewed? | Before Release
-Is there provenance for all datasets used in training? | Yes
-Does data labeling (annotation, metadata) comply with privacy laws? | Yes
-Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes
-Applicable Privacy Policy | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy)
-
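
The removed Explainability table notes that ``mindmap`` is a Denoising Diffusion Probabilistic Model that samples robot trajectories conditioned on sensor observations and a 3D reconstruction of the environment. As a rough illustration of what that style of inference loop looks like, here is a minimal sketch of conditional DDPM sampling; the `denoiser` callable, its conditioning arguments, and the noise schedule are illustrative assumptions, not the repository's actual API.

```python
import torch

def sample_trajectory(denoiser, obs_emb, scene_emb,
                      horizon=16, action_dim=8, num_steps=100):
    """Hypothetical conditional DDPM reverse process (not mindmap's real API).

    Iteratively denoises Gaussian noise into a trajectory of robot actions,
    conditioned on embeddings of sensor observations and a 3D reconstruction.
    """
    # Standard DDPM bookkeeping: linear beta schedule and cumulative products.
    betas = torch.linspace(1e-4, 2e-2, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Start from pure noise: one trajectory of `horizon` action vectors.
    x = torch.randn(1, horizon, action_dim)

    for t in reversed(range(num_steps)):
        # The network predicts the noise component given the conditioning.
        eps = denoiser(x, t, obs_emb, scene_emb)

        # Posterior mean of x_{t-1} given x_t (Ho et al., 2020, Eq. 11).
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])

        # Add noise at every step except the last.
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise

    return x  # e.g. end-effector poses, gripper states, head orientation
```

Such a denoiser would be trained with the usual noise-prediction objective; the sketch covers inference only, and the output dimensions here stand in for whatever action parameterization the policy actually uses.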