nohup: ignoring input
/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:70: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
  self.scaler = GradScaler()
/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:116: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  self.embeddings = torch.load(combined_path, map_location=self.device)
/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:180: FutureWarning: [same `torch.load` warning as above]
  self.compressor.load_state_dict(torch.load('final_compressor_model.pth', map_location=self.device))
/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:181: FutureWarning: [same `torch.load` warning as above]
  self.decompressor.load_state_dict(torch.load('final_decompressor_model.pth', map_location=self.device))
/data2/edwardsun/flow_home/cfg_dataset.py:253: FutureWarning: [same `torch.load` warning as above]
  self.embeddings = torch.load(combined_path, map_location='cpu')
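Both warnings point at straightforward API updates in newer PyTorch releases. Below is a minimal sketch of the suggested fixes, assuming the checkpoint files contain only tensors and state dicts so they load cleanly under `weights_only=True` (anything else would need to be allowlisted via `torch.serialization.add_safe_globals`); the file names come from the log, the rest is illustrative:

```python
# Sketch of the updates the FutureWarnings above recommend. Assumes the
# checkpoints hold only tensors / state dicts, so weights_only=True works;
# other pickled types would need torch.serialization.add_safe_globals.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# New-style GradScaler: torch.amp.GradScaler takes the device type directly.
scaler = torch.amp.GradScaler("cuda")

# weights_only=True restricts unpickling to tensors and allowlisted types,
# closing the arbitrary-code-execution hole the warning describes.
embeddings = torch.load("all_peptide_embeddings.pt",
                        map_location=device, weights_only=True)
compressor_state = torch.load("final_compressor_model.pth",
                              map_location=device, weights_only=True)
# compressor.load_state_dict(compressor_state)  # as in the script above
```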
Starting optimized training with batch_size=384, epochs=2000
Using GPU 0 for optimized H100 training
  Mixed precision: True
  Batch size: 384
  Target epochs: 2000
  Learning rate: 0.0012 -> 0.0006
✓ Mixed precision training enabled (BF16)
Loading ALL AMP embeddings from /data2/edwardsun/flow_project/peptide_embeddings/...
Loading combined embeddings from /data2/edwardsun/flow_project/peptide_embeddings/all_peptide_embeddings.pt...
✓ Loaded ALL embeddings: torch.Size([17968, 50, 1280])
Computing preprocessing statistics...
✓ Statistics computed and saved:
  Total embeddings: 17,968
  Mean: -0.0005 ± 0.0897
  Std: 0.0869 ± 0.1168
  Range: [-9.1738, 3.2894]
Initializing models...
✓ Model compiled with torch.compile for speedup
✓ Models initialized:
  Compressor parameters: 78,817,360
  Decompressor parameters: 39,458,720
  Flow model parameters: 50,779,584
Initializing datasets with FULL data...
Loading AMP embeddings from /data2/edwardsun/flow_project/peptide_embeddings/...
Loading combined embeddings from /data2/edwardsun/flow_project/peptide_embeddings/all_peptide_embeddings.pt (FULL DATA)...
✓ Loaded ALL embeddings: torch.Size([17968, 50, 1280])
Loading CFG data from FASTA: /home/edwardsun/flow/combined_final.fasta...
Parsing FASTA file: /home/edwardsun/flow/combined_final.fasta
Label assignment: >AP = AMP (0), >sp = Non-AMP (1)
✓ Parsed 6983 valid sequences from FASTA
  AMP sequences: 3306
  Non-AMP sequences: 3677
  Masked for CFG: 698
Loaded 6983 CFG sequences
Label distribution: [3306 3677]
Masked 698 labels for CFG training
Aligning AMP embeddings with CFG data...
Aligned 6983 samples
CFG Flow Dataset initialized:
  AMP embeddings: torch.Size([17968, 50, 1280])
  CFG labels: 6983
  Aligned samples: 6983
✓ Dataset initialized with FULL data:
  Total samples: 6,983
  Batch size: 384
  Batches per epoch: 19
  Total training steps: 38,000
  Validation every: 10,000 steps
Initializing optimizer and scheduler...
✓ Optimizer initialized:
  Base LR: 0.0012
  Min LR: 0.0006
  Warmup steps: 5000
  Weight decay: 0.01
  Gradient clip norm: 1.0
✓ Optimized Single GPU training setup complete with FULL DATA!
🚀 Starting Optimized Single GPU Flow Matching Training with FULL DATA
  GPU: 0
  Total iterations: 2000
  Batch size: 384
  Total samples: 6,983
  Mixed precision: True
  Estimated time: ~8-10 hours (overnight training with ALL data)
============================================================
Training Flow Model:   0%|          | 0/2000 [00:00<?, ?it/s]
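The "Label assignment: >AP = AMP (0), >sp = Non-AMP (1)" line describes how sequences are labeled from their FASTA headers. A hypothetical sketch of that rule, assuming headers with neither prefix are the "invalid" sequences the parser skips (the parsing loop itself is an assumption; only the prefix rule comes from the log):

```python
# Header-prefix labeling per the log: ">AP" -> AMP (0), ">sp" -> Non-AMP (1).
def parse_fasta(path: str):
    sequences, labels = [], []

    def label_for(header: str):
        if header.startswith(">AP"):
            return 0  # AMP
        if header.startswith(">sp"):
            return 1  # Non-AMP
        return None   # neither prefix: treated as invalid here

    def flush(header, chunks):
        lab = label_for(header) if header else None
        if lab is not None and chunks:
            sequences.append("".join(chunks))
            labels.append(lab)

    header, chunks = None, []
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if line.startswith(">"):
                flush(header, chunks)      # finish the previous record
                header, chunks = line, []
            elif line:
                chunks.append(line)
    flush(header, chunks)                  # finish the last record
    return sequences, labels

# seqs, labels = parse_fasta("/home/edwardsun/flow/combined_final.fasta")
```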
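"Masked 698 labels for CFG training" (698 of 6,983, i.e. 10%) is the usual classifier-free guidance trick of dropping the class label for a random fraction of samples so the model also learns an unconditional distribution. A sketch under assumed details: the mask token index and the 10% drop rate are illustrative, only the counts come from the log.

```python
# Hypothetical CFG label masking matching the log's counts (698 of 6983).
import torch

NUM_CLASSES = 2           # AMP (0), Non-AMP (1)
MASK_LABEL = NUM_CLASSES  # extra "unconditional" class id for dropped labels
CFG_DROP_RATE = 0.10      # 698 / 6983 ≈ 10%

def mask_labels_for_cfg(labels: torch.Tensor,
                        drop_rate: float = CFG_DROP_RATE) -> torch.Tensor:
    """Replace a random fraction of labels with the unconditional token."""
    drop = torch.rand(labels.shape) < drop_rate
    return torch.where(drop, torch.full_like(labels, MASK_LABEL), labels)

labels = torch.randint(0, NUM_CLASSES, (6983,))
masked = mask_labels_for_cfg(labels)
print((masked == MASK_LABEL).sum().item(), "labels masked")  # ≈ 698
```

At sampling time the model would be evaluated both with the real label and with the mask token, and the two velocity predictions blended with a guidance scale.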
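The optimizer block reports a base LR of 0.0012 decaying to 0.0006 with 5,000 warmup steps, weight decay 0.01, gradient clipping at 1.0, and BF16 mixed precision. A hedged sketch of one training step consistent with those numbers; the warmup-then-cosine schedule shape and the AdamW choice are assumptions, since the log only gives the endpoints:

```python
# One illustrative step: linear warmup to BASE_LR, cosine decay to MIN_LR,
# BF16 autocast, and gradient clipping at 1.0, per the logged hyperparameters.
import math
import torch

BASE_LR, MIN_LR = 1.2e-3, 6.0e-4
WARMUP_STEPS, TOTAL_STEPS = 5_000, 38_000

def lr_at(step: int) -> float:
    if step < WARMUP_STEPS:                     # linear warmup
        return BASE_LR * (step + 1) / WARMUP_STEPS
    t = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    return MIN_LR + 0.5 * (BASE_LR - MIN_LR) * (1.0 + math.cos(math.pi * min(t, 1.0)))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1280, 1280).to(device)  # stand-in for the flow model
# model = torch.compile(model)                  # the log's torch.compile step
optimizer = torch.optim.AdamW(model.parameters(), lr=BASE_LR, weight_decay=0.01)

step = 0
for group in optimizer.param_groups:
    group["lr"] = lr_at(step)                   # schedule applied per step

x = torch.randn(384, 1280, device=device)       # batch_size=384, dim=1280
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    loss = model(x).pow(2).mean()               # placeholder loss
optimizer.zero_grad(set_to_none=True)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # Gradient clip norm: 1.0
optimizer.step()
```

Note that with BF16 autocast a GradScaler is generally unnecessary (loss scaling mainly matters for FP16), which is consistent with the script constructing one but the log reporting BF16.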