I can't reply with my main account because the system flagged me as spam after I posted my issue in the Forum. I guess this is another bug that should be reported to your Forum admins.
The issue is still there. Here is the video recording of the issue:
When I unzip and train locally, it works. But I'm not rich; I like the access Ultralytics gives you to very high-end GPUs, and the interface is amazing.
Good job on the new Platform site!
Look, I didn't upload 300,000 images, just 5,000, and each file is 9 KB.
LLM PROMPT IDEA:

```
Give me a service idea for a service that ingests ZIP or TAR files and turns them into a dataset.
It should allow large datasets of 30,000 files and concurrent jobs from many users.
Think like a UNIX wizard: you are an expert with bash and pipelines.
List 5 common issues up front in the plan, and how to avoid them.
```
Thanks for sharing the reference ZIP — that’s really helpful. If it unzips and trains locally, this does look more like an Ultralytics Platform ingestion issue than a dataset-format issue. 5,000 tiny files should still be well within the supported upload flow; Platform accepts .zip archives up to 10 GB and then runs validation, label parsing, and stats generation as described in the dataset upload docs.
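If you want a quick local sanity check before re-uploading, here's a minimal Python sketch using the standard-library zipfile module. The `dataset.zip` path and the 10 GB comparison are illustrative assumptions, not part of any Platform tooling; it just counts the files and totals their uncompressed size:

```python
import zipfile

ARCHIVE = "dataset.zip"  # hypothetical path; point this at your real archive
LIMIT_GB = 10            # Platform's documented .zip upload limit

with zipfile.ZipFile(ARCHIVE) as zf:
    # Skip directory entries; only real files count toward the upload.
    entries = [info for info in zf.infolist() if not info.is_dir()]
    total_bytes = sum(info.file_size for info in entries)

print(f"{len(entries)} files, {total_bytes / 1e9:.3f} GB uncompressed")
# 5,000 files at ~9 KB each is roughly 0.045 GB, nowhere near the limit,
# so a failure on this archive would point at ingestion, not size.
if total_bytes / 1e9 > LIMIT_GB:
    print("Archive exceeds the upload limit; split it before uploading.")
```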
If you can, please confirm whether this happened on a brand-new dataset or when adding files to an existing dataset; that distinction was relevant to the earlier fix. As a temporary workaround, try uploading into a fresh dataset or splitting the archive into 2–3 smaller ZIPs, as sketched below.
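If you go the split route, here's one hedged way to do it in Python. The `dataset.zip` name and the three-way split are assumptions; the sketch groups files by stem so an image and its matching label file land in the same part:

```python
import zipfile
from collections import defaultdict
from pathlib import PurePosixPath

SRC = "dataset.zip"  # hypothetical source archive
PARTS = 3            # how many smaller ZIPs to produce

with zipfile.ZipFile(SRC) as src:
    names = [n for n in src.namelist() if not n.endswith("/")]

    # Group by file stem so an image and its matching label file
    # (e.g. images/a.jpg and labels/a.txt) end up in the same part.
    groups = defaultdict(list)
    for name in names:
        groups[PurePosixPath(name).stem].append(name)

    stems = sorted(groups)
    for part in range(PARTS):
        out = f"dataset_part{part + 1}.zip"
        with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
            for stem in stems[part::PARTS]:  # round-robin the stem groups
                for name in groups[stem]:
                    dst.writestr(name, src.read(name))
        print(f"wrote {out}")
```

If the archive carries a shared top-level config file, you may want to copy it into every part so each smaller ZIP stands on its own.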
And thanks for the kind words; credit goes to the Ultralytics team and community.