API Documentation Example



Search Papers by Title (Fuzzy Match) Interface

<h5>Brief Description</h5>
<ul>
  <li>Search-by-title interface, used for the page's search feature</li>
</ul>
<h5>Request URL</h5>
<ul>
  <li><code>http://xx.xx/searchPaperByTitle</code></li>
</ul>
<h5>Request Method</h5>
<ul>
  <li>POST</li>
</ul>
<h5>Parameters</h5>
<table>
  <thead>
    <tr>
      <th style="text-align: left;">Parameter</th>
      <th style="text-align: left;">Required</th>
      <th style="text-align: left;">Type</th>
      <th>Description</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left;">searchWord</td>
      <td style="text-align: left;">Yes</td>
      <td style="text-align: left;">string</td>
      <td>The term to match against paper titles (fuzzy search)</td>
    </tr>
  </tbody>
</table>
<h5>Request Body Example (JSON)</h5>
<pre><code>{"searchWord":"study"}</code></pre>
<h5>Response Example</h5>
<pre><code>[
  {
    "releasetime": "06 October 2018",
    "typeandyear": "ECCV 2018",
    "link": "https://doi.org/10.1007/978-3-030-01252-6_13",
    "abstractcontext": "Despite tremendous progress achieved in temporal action localization, state-of-the-art methods still struggle to train accurate models when annotated data is scarce. In this paper, we introduce a novel active learning framework for temporal localization that aims to mitigate this data dependency issue. We equip our framework with active selection functions that can reuse knowledge from previously annotated datasets. We study the performance of two state-of-the-art active selection functions as well as two widely used active learning baselines. To validate the effectiveness of each one of these selection functions, we conduct simulated experiments on ActivityNet. We find that using previously acquired knowledge as a bootstrapping source is crucial for active learners aiming to localize actions. When equipped with the right selection function, our proposed framework exhibits significantly better performance than standard active learning strategies, such as uncertainty sampling. Finally, we employ our framework to augment the newly compiled Kinetics action dataset with ground-truth temporal annotations. As a result, we collect Kinetics-Localization, a novel large-scale dataset for temporal action localization, which contains more than 15K YouTube videos.",
    "id": 450,
    "keyword": "Video understanding,Temporal action localization,Active learning,Video annotation",
    "title": "What Do I Annotate Next? An Empirical Study of Active Learning for Action Localization"
  },
  {
    "releasetime": "06 October 2018",
    "typeandyear": "ECCV 2018",
    "link": "https://doi.org/10.1007/978-3-030-01270-0_37",
    "abstractcontext": "This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework. The proposed framework explicitly learns a degradation transform for the original video inputs, in order to optimize the trade-off between target task performance and the associated privacy budgets on the degraded video. A notable challenge is that the privacy budget, often defined and measured in task-driven contexts, cannot be reliably indicated using any single model performance, because a strong protection of privacy has to sustain against any possible model that tries to hack privacy information. Such an uncommon situation has motivated us to propose two strategies, i.e., budget model restarting and ensemble, to enhance the generalization of the learned degradation on protecting privacy against unseen hacker models. Novel training strategies, evaluation protocols, and result visualization methods have been designed accordingly. Two experiments on privacy-preserving action recognition, with privacy budgets defined in various ways, manifest the compelling effectiveness of the proposed framework in simultaneously maintaining high target task (action recognition) performance while suppressing the privacy breach risk. The code is available at https://github.com/wuzhenyusjtu/Privacy-AdversarialLearning.",
    "id": 687,
    "keyword": "Visual privacy,Adversarial training,Action recognition",
    "title": "Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study"
  },
  ......
]</code></pre>
<h5>Response Parameter Descriptions</h5>
<table>
  <thead>
    <tr>
      <th style="text-align: left;">Parameter</th>
      <th style="text-align: left;">Type</th>
      <th>Description</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left;">releasetime</td>
      <td style="text-align: left;">String</td>
      <td>Last update time of the paper</td>
    </tr>
    <tr>
      <td style="text-align: left;">typeandyear</td>
      <td style="text-align: left;">String</td>
      <td>Type and publication year of the paper</td>
    </tr>
    <tr>
      <td style="text-align: left;">link</td>
      <td style="text-align: left;">String</td>
      <td>Link to the paper</td>
    </tr>
    <tr>
      <td style="text-align: left;">abstractcontext</td>
      <td style="text-align: left;">String</td>
      <td>Abstract of the paper</td>
    </tr>
    <tr>
      <td style="text-align: left;">id</td>
      <td style="text-align: left;">int</td>
      <td>ID of the paper</td>
    </tr>
    <tr>
      <td style="text-align: left;">keyword</td>
      <td style="text-align: left;">String</td>
      <td>Collection of the paper's keywords</td>
    </tr>
    <tr>
      <td style="text-align: left;">title</td>
      <td style="text-align: left;">String</td>
      <td>Title of the paper</td>
    </tr>
  </tbody>
</table>
<h5>Notes</h5>
<ul>
  <li>The returned result is a JSONArray object</li>
</ul>
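For reference, a minimal client-side call sketch in Python using the <code>requests</code> library. The base URL <code>http://xx.xx</code> is the placeholder from this document, and the helper name <code>search_paper_by_title</code> is illustrative only; the request and response handling follow the spec above but have not been verified against a live service.
<pre><code>import requests

# Placeholder base URL copied from this document; replace with the real service host.
BASE_URL = "http://xx.xx"


def search_paper_by_title(search_word: str) -> list:
    """Fuzzy-search papers by title via POST /searchPaperByTitle.

    Assumes the endpoint accepts a JSON body {"searchWord": ...} and
    returns a JSON array of paper objects, as documented above.
    """
    resp = requests.post(
        f"{BASE_URL}/searchPaperByTitle",
        json={"searchWord": search_word},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Mirrors the request body example above: {"searchWord": "study"}
    for paper in search_paper_by_title("study"):
        print(paper["id"], paper["title"])</code></pre>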
