Recent Articles

Open access

ISSN: 2666-6510

Few-shot Named Entity Recognition via encoder and class intervention

In the real world, the large and complex nature of text increases the difficulty of tagging and results in a limited amount of tagged text. Few-shot Named Entity Recognition (NER) only uses a small amount...

CPT: Colorful Prompt Tuning for pre-trained vision-language models

Vision-Language Pre-training (VLP) models have shown promising capabilities in grounding natural language in image data, facilitating a broad range of cross-modal tasks. However, we note that there...

Enhancing neural network classification using fractional-order activation functions

In this paper, a series of novel activation functions is presented, which is derived using the improved Riemann–Liouville conformable fractional derivative (RLCFD). This study investigates the use of...

GPT understands, too

Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU). However, our preliminary study reveals that manual discrete prompts...

Improving trajectory classification through Kramers–Moyal coefficients

Trajectory classification focuses on predicting the class or category of a moving object based on its observed movement over time. The classification of trajectory data using classical approaches can...

MindLLM: Lightweight large language model pre-training, evaluation and domain application

Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence. While general artificial...

Authorship style transfer with inverse transfer data augmentation

Authorship style transfer aims to modify the style of neutral text to match the unique speaking or writing style of a particular individual. While Large Language Models (LLMs) present promising solutions,...

Relation-aware deep neural network enables more efficient biomedical knowledge acquisition from massive literature

Biomedical knowledge is typically organized in a relational scheme, such as chemical-disease relation, gene-disease relation, and gene-pathway relation. Biomedical scientists heavily rely on search...

A study of natural robustness of deep reinforcement learning algorithms towards adversarial perturbations

Deep reinforcement learning (DRL) has been shown to have numerous potential applications in the real world. However, DRL algorithms are still extremely sensitive to noise and adversarial perturbations,...

Wave2Graph: Integrating spectral features and correlations for graph-based learning in sound waves

This paper investigates a novel graph-based representation of sound waves inspired by the physical phenomenon of correlated vibrations. We propose a Wave2Graph framework for integrating multiple acoustic...

CellBoost: A pipeline for machine assisted annotation in neuroanatomy

One of the important yet labor-intensive tasks in neuroanatomy is the identification of select populations of cells. Current high-throughput techniques enable marking cells with histochemical fluorescent...

Large language models in law: A survey

The advent of artificial intelligence (AI) has significantly impacted the traditional judicial industry. More recently, with the development of AI-generated content (AIGC), AI and law have found...

Generating graph perturbations to enhance the generalization of GNNs

Graph neural networks (GNNs) have become the standard approach for performing machine learning on graphs. Such models need large amounts of training data; however, in several graph classification and...

Mining contacts from spatio-temporal trajectories

Contact mining is the discovery of objects that move in close proximity to one another, in order to reveal possible interactions, infections, collisions, or contacts. This process can be significantly beneficial...

Improving task generalization via unified schema prompt

Task generalization has been a long-standing challenge in Natural Language Processing (NLP). Recent research attempts to improve the task generalization ability of pre-trained language models by mapping...

Associating multiple vision transformer layers for fine-grained image representation

Accurate discriminative region proposal plays an important role in fine-grained image recognition. The vision transformer (ViT) has had a striking impact on computer vision due to its innate...

Joint span and token framework for few-shot named entity recognition

Few-shot Named Entity Recognition (NER) is a challenging task that involves identifying new entity types using a limited number of labeled instances for training. Currently, the majority of Few-shot...

MOTT: A new model for multi-object tracking based on green learning paradigm

Multi-object tracking (MOT) is one of the most essential and challenging tasks in computer vision (CV). Unlike object detectors, MOT systems nowadays are more complicated and consist of several neural...

Multi-grained hypergraph interest modeling for conversational recommendation

A conversational recommender system (CRS) interacts with users through multi-turn dialogues in natural language, aiming to provide high-quality recommendations for users' instant information needs....

A unified network embedding algorithm for multi-type similarity measures

Traditional network embedding aims to learn representations by capturing a predefined vertex-to-vertex similarity measure. However, in practice, there are different types of similarity measures (e.g.,...

AdaDS: Adaptive data selection for accelerating pre-trained language model knowledge distillation

Knowledge distillation (KD) is a widely used method for transferring knowledge from large teacher models to computationally efficient student models. Unfortunately, the computational cost of KD becomes...
