

In recent years, graph pre-training has gained significant attention, focusing on acquiring transferable knowledge from unlabeled graph data to improve downstream performance. Despite these recent endeavors, negative transfer remains a major concern when applying graph pre-trained models to downstream tasks. Previous studies have made great efforts on the questions of what to pre-train and how to pre-train by designing a variety of graph pre-training and fine-tuning strategies. However, there are cases where even the most advanced "pre-train and fine-tune" paradigms fail to yield distinct benefits. This paper introduces a generic framework, W2PGNN, to answer the crucial question of when to pre-train (i.e., in what situations graph pre-training can be expected to help) before undertaking costly pre-training or fine-tuning. We start from a new perspective: exploring the generative mechanisms that lead from the pre-training data to the downstream data. In particular, W2PGNN first fits the pre-training data to a set of graphon bases, where each element of a graphon basis (i.e., a graphon) captures a fundamental transferable pattern shared by a collection of pre-training graphs. All convex combinations of the graphon bases give rise to a generator space, and the graphs generated from this space form the solution space of downstream data that can benefit from pre-training. The feasibility of pre-training can then be quantified as the generation probability of the downstream data under the best generator in the generator space. W2PGNN offers three broad applications: delimiting the application scope of graph pre-trained models, quantifying the feasibility of pre-training, and assisting in the selection of pre-training data to enhance downstream performance. We provide a theoretically sound solution for the first application and extensive empirical justification for the latter two.
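Read literally, the quantification above admits a compact formalization. The sketch below uses our own notation (the basis elements W_i, the mixture weights alpha, and the likelihood P(· | ·) are assumptions for illustration, not symbols taken from the paper):

```latex
% Graphon basis fitted to the pre-training data (notation ours):
%   W_1, \dots, W_b : [0,1]^2 \to [0,1]
% Generator space: all convex combinations of the basis elements.
\mathcal{G} \;=\; \Bigl\{\, W_\alpha = \textstyle\sum_{i=1}^{b} \alpha_i W_i
    \;\Bigm|\; \alpha_i \ge 0,\; \textstyle\sum_{i=1}^{b} \alpha_i = 1 \,\Bigr\}

% Feasibility of pre-training for downstream data G_d:
% the best generation probability achievable within the generator space.
\mathrm{feasibility}(G_d) \;=\; \max_{W_\alpha \in \mathcal{G}} \; P\bigl(G_d \mid W_\alpha\bigr)
```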

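To make the pipeline concrete, here is a minimal, self-contained sketch (NumPy only) of the same idea: estimate step-function graphons from pre-training graphs, cluster them into a small basis, and score a downstream graph by how well the closest convex combination of the basis reproduces its empirical graphon. All function names and the MSE-based surrogate for "generation probability" are our assumptions for illustration, not the W2PGNN implementation.

```python
import numpy as np

def empirical_graphon(adj, k=32):
    """Step-function (k x k) graphon estimate: sort nodes by degree,
    then average the adjacency matrix over a k x k block grid."""
    order = np.argsort(-adj.sum(axis=1))            # degree-descending order
    a = adj[np.ix_(order, order)].astype(float)
    n = a.shape[0]
    idx = (np.arange(n) * k) // n                   # node -> block index
    w = np.zeros((k, k))
    cnt = np.zeros((k, k))
    np.add.at(w, (idx[:, None], idx[None, :]), a)
    np.add.at(cnt, (idx[:, None], idx[None, :]), 1.0)
    return w / np.maximum(cnt, 1.0)

def fit_graphon_basis(graphs, k=32, n_basis=3, iters=50, seed=0):
    """Toy k-means over graphon estimates; each centroid plays the role
    of one basis element (a transferable structural pattern)."""
    W = np.stack([empirical_graphon(a, k) for a in graphs])
    rng = np.random.default_rng(seed)
    centers = W[rng.choice(len(W), size=n_basis, replace=False)]
    for _ in range(iters):
        dists = ((W[:, None] - centers[None]) ** 2).sum(axis=(2, 3))
        labels = dists.argmin(axis=1)
        for j in range(n_basis):
            if np.any(labels == j):
                centers[j] = W[labels == j].mean(axis=0)
    return centers

def feasibility(downstream_adj, basis, k=32, n_samples=2000, seed=0):
    """Surrogate feasibility score: negative MSE between the downstream
    empirical graphon and its best convex combination of basis elements,
    searched by sampling mixture weights from the probability simplex."""
    target = empirical_graphon(downstream_adj, k)
    rng = np.random.default_rng(seed)
    alphas = rng.dirichlet(np.ones(len(basis)), size=n_samples)
    mixes = np.tensordot(alphas, basis, axes=1)     # (n_samples, k, k)
    mse = ((mixes - target) ** 2).mean(axis=(1, 2))
    return -mse.min()                               # higher = more feasible

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic pre-training graphs: symmetric Erdos-Renyi samples.
    def er(n, p):
        u = np.triu((rng.random((n, n)) < p).astype(float), 1)
        return u + u.T
    pretrain = [er(80, p) for p in (0.05, 0.1, 0.3) for _ in range(5)]
    basis = fit_graphon_basis(pretrain, n_basis=3)
    print("in-distribution:    ", feasibility(er(80, 0.1), basis))
    print("out-of-distribution:", feasibility(er(80, 0.8), basis))
```

Sampling Dirichlet weights is a crude stand-in for optimizing over the simplex, and blockwise MSE is a crude stand-in for a likelihood; a faithful implementation would fit the mixture weights directly and use a principled graphon distance.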