ℹī¸ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
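The filter chain in the table above can be sketched in Python. The field names are taken from the Condition column; the record layout itself (a flat dict per page) is an assumption for illustration:

```python
from datetime import datetime, timedelta

def passes_index_filters(page, now=None):
    """Return True if a crawled page record passes every filter in the table."""
    now = now or datetime.utcnow()
    # HTTP status: download_http_code = 200
    if page["download_http_code"] != 200:
        return False
    # Age cutoff: download_stamp > now() - 6 MONTH (~183 days)
    if page["download_stamp"] <= now - timedelta(days=183):
        return False
    # History drop: isNull(history_drop_reason)
    if page.get("history_drop_reason") is not None:
        return False
    # Spam/ban: fh_dont_index != 1 AND ml_spam_score = 0
    if page.get("fh_dont_index") == 1 or page.get("ml_spam_score", 0) != 0:
        return False
    # Canonical: meta_canonical IS NULL OR = '' OR = src_unparsed
    if page.get("meta_canonical") not in (None, "", page["src_unparsed"]):
        return False
    return True
```

Each check mirrors one row of the table, so a failing row maps directly to the first `return False` that fires.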
| Property | Value |
|---|---|
| URL | https://papers.cool/arxiv/2509.07756 |
| Last Crawled | 2026-04-14 17:04:53 (1 day ago) |
| First Indexed | 2025-09-11 13:33:35 (7 months ago) |
| HTTP Status Code | 200 |
| Meta Title | Spectral and Rhythm Feature Performance Evaluation for Category and Class Level Audio Classification with Deep Convolutional Neural Networks | Cool Papers - Immersive Paper Discovery |
| Meta Description | Next to decision tree and k-nearest neighbours algorithms deep convolutional neural networks (CNNs) are widely used to classify audio data in many domains like music, speech or environmental sounds. To train a specific CNN various spectral and rhythm features like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCC), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams can be used as digital image input data for the neural network. The performance of these spectral and rhythm features for audio category level as well as audio class level classification is investigated in detail with a deep CNN and the ESC-50 dataset with 2,000 labeled environmental audio recordings using an end-to-end deep learning pipeline. The evaluated metrics accuracy, precision, recall and F1 score for multiclass classification clearly show that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCC) perform significantly better than the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs. |
| Meta Canonical | null |
| Boilerpipe Text | Next to decision tree and k-nearest neighbours algorithms deep convolutional neural networks (CNNs) are widely used to classify audio data in many domains like music, speech or environmental sounds. To train a specific CNN various spectral and rhythm features like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCC), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams can be used as digital image input data for the neural network. The performance of these spectral and rhythm features for audio category level as well as audio class level classification is investigated in detail with a deep CNN and the ESC-50 dataset with 2,000 labeled environmental audio recordings using an end-to-end deep learning pipeline. The evaluated metrics accuracy, precision, recall and F1 score for multiclass classification clearly show that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCC) perform significantly better than the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs.
Subjects: Sound, Artificial Intelligence, Computer Vision and Pattern Recognition, Machine Learning, Audio and Speech Processing
Publish: 2025-09-09 13:54:41 UTC |
| Markdown | # 2509\.07756
Total: 1
## [\#1](https://arxiv.org/abs/2509.07756 "1/1") [Spectral and Rhythm Feature Performance Evaluation for Category and Class Level Audio Classification with Deep Convolutional Neural Networks](https://papers.cool/arxiv/2509.07756)
**Author**: [Friedrich Wolf-Monheim](https://arxiv.org/search/?searchtype=author&query=Friedrich%20Wolf-Monheim)
Next to decision tree and k-nearest neighbours algorithms deep convolutional neural networks (CNNs) are widely used to classify audio data in many domains like music, speech or environmental sounds. To train a specific CNN various spectral and rhythm features like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCC), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams can be used as digital image input data for the neural network. The performance of these spectral and rhythm features for audio category level as well as audio class level classification is investigated in detail with a deep CNN and the ESC-50 dataset with 2,000 labeled environmental audio recordings using an end-to-end deep learning pipeline. The evaluated metrics accuracy, precision, recall and F1 score for multiclass classification clearly show that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCC) perform significantly better than the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs.
**Subjects**: [Sound](https://papers.cool/arxiv/cs.SD) , [Artificial Intelligence](https://papers.cool/arxiv/cs.AI) , [Computer Vision and Pattern Recognition](https://papers.cool/arxiv/cs.CV) , [Machine Learning](https://papers.cool/arxiv/cs.LG) , [Audio and Speech Processing](https://papers.cool/arxiv/eess.AS)
**Publish**: 2025-09-09 13:54:41 UTC
***
Designed by [kexue.fm](https://kexue.fm/) \| Powered by [kimi.ai](https://kimi.moonshot.cn/?ref=papers.cool) |
| Readable Markdown | Next to decision tree and k-nearest neighbours algorithms deep convolutional neural networks (CNNs) are widely used to classify audio data in many domains like music, speech or environmental sounds. To train a specific CNN various spectral and rhythm features like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCC), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams can be used as digital image input data for the neural network. The performance of these spectral and rhythm features for audio category level as well as audio class level classification is investigated in detail with a deep CNN and the ESC-50 dataset with 2,000 labeled environmental audio recordings using an end-to-end deep learning pipeline. The evaluated metrics accuracy, precision, recall and F1 score for multiclass classification clearly show that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCC) perform significantly better than the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs.
**Subjects**: [Sound](https://papers.cool/arxiv/cs.SD) , [Artificial Intelligence](https://papers.cool/arxiv/cs.AI) , [Computer Vision and Pattern Recognition](https://papers.cool/arxiv/cs.CV) , [Machine Learning](https://papers.cool/arxiv/cs.LG) , [Audio and Speech Processing](https://papers.cool/arxiv/eess.AS)
**Publish**: 2025-09-09 13:54:41 UTC
*** |
| Shard | 61 (laksa) |
| Root Hash | 17309916099783778261 |
| Unparsed URL | cool,papers!/arxiv/2509.07756 s443 |
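The Unparsed URL row stores the host in reversed, comma-separated segments before the `!`, with the scheme encoded in the trailing port token (`s443` suggesting HTTPS on port 443) — decoding it this way reproduces the URL row above. A sketch of that decoding, assuming the convention holds:

```python
def unparsed_to_url(unparsed):
    """Decode a reversed-host unparsed URL, e.g.
    "cool,papers!/arxiv/2509.07756 s443" -> "https://papers.cool/arxiv/2509.07756".
    """
    host_part, rest = unparsed.split("!", 1)   # reversed host segments before "!"
    path, _, port = rest.rpartition(" ")       # trailing port token after the path
    scheme = "https" if port.startswith("s") else "http"
    host = ".".join(reversed(host_part.split(",")))
    return f"{scheme}://{host}{path}"
```

The reversed-host form sorts all pages of one site (and its subdomains) together, which is a common trick in crawler key spaces; the exact field semantics here are inferred from this record only.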
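The crawled abstract compares spectral inputs such as mel-scaled spectrograms and MFCCs. As a minimal, NumPy-only illustration of how those two features are derived (window size, hop length, and filter counts here are arbitrary choices, not the paper's settings):

```python
import numpy as np

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular mel filters mapping an rfft power spectrum to n_mels bands."""
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    freqs = mel_to_hz(np.linspace(0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * freqs / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising edge
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling edge
    return fb

def mel_spectrogram(y, sr, n_fft=1024, hop=256, n_mels=64):
    """Windowed power spectra projected through the mel filterbank."""
    frames = np.array([y[i:i + n_fft] * np.hanning(n_fft)
                       for i in range(0, len(y) - n_fft, hop)])
    S = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # (n_frames, n_fft//2+1)
    return mel_filterbank(sr, n_fft, n_mels) @ S.T      # (n_mels, n_frames)

def mfcc(y, sr, n_mfcc=13, **kw):
    """DCT-II of the log mel spectrogram along the mel axis."""
    log_s = np.log(mel_spectrogram(y, sr, **kw) + 1e-10)
    n_mels = log_s.shape[0]
    n = np.arange(n_mels)[:, None]
    dct = np.cos(np.pi / n_mels * (n + 0.5) * np.arange(n_mfcc)[None, :])
    return dct.T @ log_s                                # (n_mfcc, n_frames)
```

Both functions return 2-D arrays that can be rendered as images and fed to a CNN, which is how the abstract describes using these features as network input.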