đŸ•ˇī¸ Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 61 (from laksa060)
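The inspector reports shard 61 served from host laksa060, which suggests the usual hash-then-mod assignment of a URL (or URL key) to a shard. The production hash function and shard count are not shown in this dump; the sketch below is a hypothetical illustration using CRC-32 and an assumed 128 shards.

```python
import zlib

NUM_SHARDS = 128  # hypothetical shard count; the real cluster size is not shown here


def shard_for(url: str) -> int:
    """Map a URL to a shard index by hashing and reducing modulo NUM_SHARDS.

    CRC-32 stands in for whatever hash the crawler actually uses
    (e.g. a 64-bit farm/city-style hash); only the pattern is the point.
    """
    return zlib.crc32(url.encode("utf-8")) % NUM_SHARDS


shard = shard_for("https://papers.cool/arxiv/2509.07756")
```

The same URL always maps to the same shard, which is what lets the inspector route every subsequent per-URL query to a single host.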

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

â„šī¸ Skipped - page is already crawled

📄 INDEXABLE ¡ ✅ CRAWLED (1 day ago) ¡ 🤖 ROBOTS ALLOWED

Page Info Filters

Filter        Status  Condition                                         Details
HTTP status   PASS    download_http_code = 200                          HTTP 200
Age cutoff    PASS    download_stamp > now() - 6 MONTH                  0 months ago
History drop  PASS    isNull(history_drop_reason)                       No drop reason
Spam/ban      PASS    fh_dont_index != 1 AND ml_spam_score = 0          ml_spam_score = 0
Canonical     PASS    meta_canonical IS NULL OR = '' OR = src_unparsed  Not set
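The five filters above combine into a single indexability predicate. A minimal sketch, assuming the record arrives as a dict with the column names from the table and `download_stamp` as a `datetime` (the real evaluation happens server-side in SQL; field access patterns here are illustrative):

```python
from datetime import datetime, timedelta


def is_indexable(rec: dict, now: datetime) -> bool:
    """Mirror the five page-info filters from the table above."""
    canonical = rec.get("meta_canonical")
    return (
        rec["download_http_code"] == 200                      # HTTP status
        and rec["download_stamp"] > now - timedelta(days=182)  # age cutoff, ~6 months
        and rec.get("history_drop_reason") is None             # history drop
        and rec.get("fh_dont_index") != 1                      # spam/ban flag
        and rec.get("ml_spam_score", 0) == 0                   # spam score
        and (canonical in (None, "")                           # canonical unset...
             or canonical == rec.get("src_unparsed"))          # ...or self-referential
    )
```

Applied to the values shown on this page (HTTP 200, crawled 1 day ago, no drop reason, zero spam score, canonical unset), every clause passes, matching the INDEXABLE verdict.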

Page Details

URL: https://papers.cool/arxiv/2509.07756
Last Crawled: 2026-04-14 17:04:53 (1 day ago)
First Indexed: 2025-09-11 13:33:35 (7 months ago)
HTTP Status Code: 200
Meta Title: Spectral and Rhythm Feature Performance Evaluation for Category and Class Level Audio Classification with Deep Convolutional Neural Networks | Cool Papers - Immersive Paper Discovery
Meta Description: Next to decision tree and k-nearest neighbours algorithms deep convolutional neural networks (CNNs) are widely used to classify audio data in many domains like music, speech or environmental sounds. To train a specific CNN various spectral and rhythm features like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCC), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams can be used as digital image input data for the neural network. The performance of these spectral and rhythm features for audio category level as well as audio class level classification is investigated in detail with a deep CNN and the ESC-50 dataset with 2,000 labeled environmental audio recordings using an end-to-end deep learning pipeline. The evaluated metrics accuracy, precision, recall and F1 score for multiclass classification clearly show that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCC) perform significantly better then the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs.
Meta Canonical: null
Boilerpipe Text:
Next to decision tree and k-nearest neighbours algorithms deep convolutional neural networks (CNNs) are widely used to classify audio data in many domains like music, speech or environmental sounds. To train a specific CNN various spectral and rhythm features like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCC), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams can be used as digital image input data for the neural network. The performance of these spectral and rhythm features for audio category level as well as audio class level classification is investigated in detail with a deep CNN and the ESC-50 dataset with 2,000 labeled environmental audio recordings using an end-to-end deep learning pipeline. The evaluated metrics accuracy, precision, recall and F1 score for multiclass classification clearly show that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCC) perform significantly better then the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs. Subjects : Sound , Artificial Intelligence , Computer Vision and Pattern Recognition , Machine Learning , Audio and Speech Processing Publish : 2025-09-09 13:54:41 UTC
Markdown:
# 2509\.07756 Total: 1 ## [\#1](https://arxiv.org/abs/2509.07756 "1/1") [Spectral and Rhythm Feature Performance Evaluation for Category and Class Level Audio Classification with Deep Convolutional Neural Networks](https://papers.cool/arxiv/2509.07756) [\[PDF\]]() [\[Copy\]]() [\[Kimi1\]]() [\[REL\]]() **Author**: [Friedrich Wolf-Monheim](https://arxiv.org/search/?searchtype=author&query=Friedrich%20Wolf-Monheim) Next to decision tree and k-nearest neighbours algorithms deep convolutional neural networks (CNNs) are widely used to classify audio data in many domains like music, speech or environmental sounds. To train a specific CNN various spectral and rhythm features like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCC), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams can be used as digital image input data for the neural network. The performance of these spectral and rhythm features for audio category level as well as audio class level classification is investigated in detail with a deep CNN and the ESC-50 dataset with 2,000 labeled environmental audio recordings using an end-to-end deep learning pipeline. The evaluated metrics accuracy, precision, recall and F1 score for multiclass classification clearly show that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCC) perform significantly better then the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs. 
**Subjects**: [Sound](https://papers.cool/arxiv/cs.SD) , [Artificial Intelligence](https://papers.cool/arxiv/cs.AI) , [Computer Vision and Pattern Recognition](https://papers.cool/arxiv/cs.CV) , [Machine Learning](https://papers.cool/arxiv/cs.LG) , [Audio and Speech Processing](https://papers.cool/arxiv/eess.AS) **Publish**: 2025-09-09 13:54:41 UTC *** Designed by [kexue.fm](https://kexue.fm/) \| Powered by [kimi.ai](https://kimi.moonshot.cn/?ref=papers.cool) Include([OR]("The logical relationship between keywords (OR/AND)")): Exclude: Search Filter Highlight Stared Paper(s): \#1 [Spectral and Rhythm Feature Performance Evaluation for Category and Class Level Audio Classification with Deep Convolutional Neural Networks](https://papers.cool/arxiv/2509.07756#2509.07756) Export Magic Token: Kimi Language: Desc Language: Save Bug report? Issue submit? Please visit: **Github:** <https://github.com/bojone/papers.cool> Please read our [Disclaimer](https://github.com/bojone/papers.cool/blob/main/Disclaimer/README_en.md) before proceeding. For more interesting features, please visit [kexue.fm](https://kexue.fm/) and [kimi.ai](https://kimi.moonshot.cn/?ref=papers.cool).
Readable Markdown:
Next to decision tree and k-nearest neighbours algorithms deep convolutional neural networks (CNNs) are widely used to classify audio data in many domains like music, speech or environmental sounds. To train a specific CNN various spectral and rhythm features like mel-scaled spectrograms, mel-frequency cepstral coefficients (MFCC), cyclic tempograms, short-time Fourier transform (STFT) chromagrams, constant-Q transform (CQT) chromagrams and chroma energy normalized statistics (CENS) chromagrams can be used as digital image input data for the neural network. The performance of these spectral and rhythm features for audio category level as well as audio class level classification is investigated in detail with a deep CNN and the ESC-50 dataset with 2,000 labeled environmental audio recordings using an end-to-end deep learning pipeline. The evaluated metrics accuracy, precision, recall and F1 score for multiclass classification clearly show that the mel-scaled spectrograms and the mel-frequency cepstral coefficients (MFCC) perform significantly better then the other spectral and rhythm features investigated in this research for audio classification tasks using deep CNNs. **Subjects**: [Sound](https://papers.cool/arxiv/cs.SD) , [Artificial Intelligence](https://papers.cool/arxiv/cs.AI) , [Computer Vision and Pattern Recognition](https://papers.cool/arxiv/cs.CV) , [Machine Learning](https://papers.cool/arxiv/cs.LG) , [Audio and Speech Processing](https://papers.cool/arxiv/eess.AS) **Publish**: 2025-09-09 13:54:41 UTC ***
Shard: 61 (laksa)
Root Hash: 17309916099783778261
Unparsed URL: cool,papers!/arxiv/2509.07756 s443
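The "unparsed URL" key appears to be a reversed-host normalization: domain labels reversed and comma-joined, `!` before the path, and a trailing port tag where `s443` presumably means HTTPS on port 443. The format is inferred from this single example; the real normalizer may differ. A hypothetical reconstruction:

```python
from urllib.parse import urlsplit


def to_unparsed(url: str) -> str:
    """Build the reversed-host "unparsed URL" key.

    Format inferred from one observed example
    (https://papers.cool/... -> "cool,papers!/arxiv/2509.07756 s443");
    query-string and plain-HTTP handling are guesses.
    """
    p = urlsplit(url)
    # Reverse the host labels and join with commas: papers.cool -> cool,papers
    host = ",".join(reversed(p.hostname.split(".")))
    path = p.path or "/"
    if p.query:
        path += "?" + p.query
    # "s443" is read here as "secure, port 443"; plain HTTP is assumed
    # to emit just the bare port number.
    port = p.port or (443 if p.scheme == "https" else 80)
    tag = f"s{port}" if p.scheme == "https" else str(port)
    return f"{host}!{path} {tag}"
```

Reversed-host keys like this keep all pages of one domain (and one registrable suffix) lexicographically adjacent, which is convenient for range scans in a sharded URL table.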