Commit Graph

30 Commits

Author SHA1 Message Date
nuluh
2fbdeac1eb refactor(test): update import statement to use data_preprocessing module 2025-07-18 19:29:02 +07:00
nuluh
2504157b29 feat(src): replace convert.py with src/data_preprocessing.py and fix the prefix parameter in several functions 2025-07-02 03:25:18 +07:00
nuluh
79070921d7 feat(data): add complement_pairs function to generate complement tuples for an alternative undamaged-case method 2025-06-27 10:33:36 +07:00
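The commit above only names `complement_pairs`; a minimal sketch of what generating complement tuples might look like, where the pairing rule (full cross product of damaged and undamaged case ids) is an assumption made purely for illustration:

```python
from itertools import product

def complement_pairs(damaged_ids, undamaged_ids):
    """Pair every damaged-case id with every undamaged-case id.

    The cross-product pairing rule here is an illustrative assumption;
    the repository's actual function may pair cases differently.
    """
    return list(product(damaged_ids, undamaged_ids))

# Each tuple couples a damage scenario with an undamaged baseline.
print(complement_pairs(["D1", "D2"], ["U1"]))
# [('D1', 'U1'), ('D2', 'U1')]
```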
nuluh
e8eb07a91b refactor(data): improve variable naming in generate_df_tuples function for clarity 2025-06-26 10:53:10 +07:00
nuluh
c98c6a091b refactor(data): update generate_df_tuples function for improved code readability 2025-06-26 10:51:29 +07:00
nuluh
d0b603ba9f fix(data): Update DataProcessor instantiation for new data preprocessing implementation 2025-06-18 08:30:12 +07:00
nuluh
1164627bac fix(data): Fix export_to_csv to support the newly added undamaged scenario and add a new include_time parameter to include 'Time' data 2025-06-18 01:54:12 +07:00
nuluh
58a672a680 fix(data): Fix generate_df_tuples function output bug when the special_groups argument is passed 2025-06-17 13:20:27 +07:00
nuluh
24c1484300 feat(data): Enhance DataProcessor to support dynamic base path and improve data loading with error handling and memory efficiency 2025-06-16 17:35:27 +07:00
nuluh
60ff4e0fa9 feat(data): Propose new damage file index generation to improve structure and flexibility in DataFrame handling 2025-06-16 03:13:07 +07:00
nuluh
3e652accfb refactor(data): remove unnecessary variable declaration in DataProcessor for loading dataframes 2025-06-14 04:02:42 +07:00
nuluh
66a09e0ddf feat(data): Enhance damage file index generation with undamaged file handling and improved error management (WIP) 2025-06-14 04:02:42 +07:00
nuluh
195f8143f0 refactor(data): remove redundant column extraction method and simplify dataframe loading 2025-06-14 00:57:54 +07:00
nuluh
ebaa263781 chore(convert): comment out create_damage_files obsolete function 2025-06-09 18:59:51 +07:00
nuluh
1511012e11 refactor(test): update test script to generate damage files index for dataset_B and adjust export path for processed data 2025-04-20 16:02:16 +07:00
nuluh
36b36c41ba feat(data): add export_to_csv method for saving processed data per individual sensor end and update test script
Closes #40
2025-04-17 10:10:19 +07:00
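The `export_to_csv` method added above is described only at this level of detail; a hedged sketch of a per-sensor export with the `include_time` flag mentioned in the later fix commit — column names, file layout, and the `'Time'` column convention are all assumptions:

```python
import pandas as pd
from pathlib import Path

def export_to_csv(df, out_dir, include_time=False):
    """Write each sensor column of `df` to its own CSV file.

    `include_time` mirrors the parameter named in commit 1164627bac;
    the 'Time' column name and one-file-per-sensor layout are
    illustrative assumptions, not the repository's confirmed API.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for col in (c for c in df.columns if c != "Time"):
        cols = ["Time", col] if include_time and "Time" in df.columns else [col]
        df[cols].to_csv(out / f"{col}.csv", index=False)
```

Called with `include_time=True`, each output file carries the shared time axis alongside one sensor's samples, which keeps the per-sensor files self-describing.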
nuluh
ff64f3a3ab refactor(data): update type annotations for damage files index and related classes (needs a better implementation) 2025-03-22 19:48:50 +07:00
nuluh
58a316d9c8 feat(data): implement damage files index generation and data processing
Closes #38
2025-03-21 15:58:50 +07:00
nuluh
35e25ba4c6 fix(data): ensure output directories are created before saving files
Closes #35
2025-03-16 18:30:52 +07:00
nuluh
c8653f53ea fix(data): update output file naming to include customizable prefix 2025-03-16 13:58:50 +07:00
nuluh
c28e79b022 feat(convert): add prefix parameter to create_damage_files for customizable file naming
Closes #31
2025-03-16 10:57:13 +07:00
nuluh
a1fbe8bd93 feat(convert): Update damage scenarios and output file naming conventions 2024-12-08 18:08:59 +07:00
nuluh
ff5578652f fix(script): Fix bug where the wrong column was taken, by indexing columns and sensor_end_map with the enumeration loop index. 2024-09-03 12:08:53 +07:00
nuluh
db2c5d3a4e feat(script): Update output directory in convert.py 2024-09-03 11:50:44 +07:00
nuluh
ea978de872 - 2024-09-03 11:43:46 +07:00
nuluh
465d257850 feat(script): Add zero-padding to converted CSV filenames to standardize the processing pipeline 2024-09-03 11:38:49 +07:00
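Zero-padding, as in the commit above, makes filenames sort lexicographically in the same order as their numeric index. A minimal sketch — the `D` prefix, width of 3, and `.csv` extension are assumptions for illustration:

```python
def padded_name(index, prefix="D", width=3, ext=".csv"):
    """Build a zero-padded filename like 'D007.csv'.

    Zero-padding keeps lexicographic and numeric ordering identical,
    so downstream tools can list files in the expected order.
    The prefix/width scheme here is an illustrative assumption.
    """
    return f"{prefix}{index:0{width}d}{ext}"

print(padded_name(7))   # D007.csv
print(padded_name(42))  # D042.csv
```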
nuluh
d12eea0acf feat(data-processing): Implement CSV data transformation for SVM analysis
Introduce a Python script for transforming QUGS 2D grid-structured data into a simplified 1D beam format suitable for SVM-based damage detection. The script efficiently slices original CSV files into smaller, manageable sets, correlating specific damage scenarios with their corresponding sensor data. This change addresses the challenge of retaining critical damage localization information during the data conversion process, ensuring high-quality, relevant data for 1D analysis.

Closes #20
2024-09-03 11:33:23 +07:00
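The transformation described in the commit body above — slicing wide CSVs so each damage scenario keeps only its relevant sensor columns — can be sketched as follows. The `scenario_columns` mapping stands in for the script's real damage/sensor correlation and is purely an assumption:

```python
import pandas as pd
from pathlib import Path

def slice_to_beams(csv_path, scenario_columns, out_dir):
    """Split one wide 2D-grid CSV into per-scenario 1D files.

    `scenario_columns` maps a damage-scenario name to the sensor
    columns relevant to it. This mapping, and the one-file-per-
    scenario output, are illustrative assumptions about the script.
    """
    df = pd.read_csv(csv_path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for scenario, cols in scenario_columns.items():
        # Keep only this scenario's sensors, preserving row order.
        df[cols].to_csv(out / f"{scenario}.csv", index=False)
```

The key design point the commit message describes is that the scenario-to-column correlation is applied during slicing, so damage-localization information survives the reduction to 1D.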
nuluh
3860f2cc5b fix(docs): Move readme.md to the raw data directory, since the script simulates raw data coming from accelerometer sensors; processed data should instead be generated by simulating frequency-domain data. 2024-08-18 10:34:22 +07:00
nuluh
6783cfeb3f docs(readme): Improve data README.md explanation
Update the README.md file in the data/processed directory to provide clearer instructions on how to load the data from the desired Dx_TESTy.csv file. This change enhances the usability of the data files for analysis.
2024-08-15 09:46:50 +07:00
nuluh
153e8cb109 feat(data): Initialize dummy data
- Create a Python script to generate CSV files in a structured folder hierarchy under `data/processed` with specific damage levels and tests.
- Add a `.gitignore` file to exclude CSV files from Git tracking, enhancing data privacy and reducing repository size.
- Include a `README.md` in the `data` directory to clearly document the directory structure, file content, and their intended use for clarity and better usability.

Closes #7
2024-08-14 23:26:06 +07:00
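The initial commit's dummy-data generator can be sketched as below. The `D{x}/TEST{y}.csv` hierarchy follows the damage-level/test structure and the `Dx_TESTy.csv` naming mentioned in the README commits, but the exact layout, sensor count, and row count are assumptions for illustration:

```python
import csv
import random
from pathlib import Path

def init_dummy_data(root="data/processed", damage_levels=3, tests=2,
                    rows=5, sensors=4):
    """Create data/processed/D{x}/TEST{y}.csv files of random samples.

    Folder layout, sensor count, and sample count are illustrative
    assumptions standing in for the repository's actual script.
    """
    for d in range(1, damage_levels + 1):
        folder = Path(root) / f"D{d}"
        folder.mkdir(parents=True, exist_ok=True)
        for t in range(1, tests + 1):
            with open(folder / f"TEST{t}.csv", "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow([f"S{i}" for i in range(1, sensors + 1)])
                for _ in range(rows):
                    writer.writerow([round(random.random(), 6)
                                     for _ in range(sensors)])
```

A matching `.gitignore` line such as `data/processed/**/*.csv` would keep the generated files out of version control, as the commit's second bullet describes.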