Through extensive experiments, the authors find that temperature scaling works better than the alternatives for calibrating the confidence of deep networks.
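A minimal sketch of the idea (all names here are illustrative, not from the paper's code): a single scalar temperature, fit on a held-out set, divides the logits before the softmax.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    """Temperature scaling: divide logits by one scalar T before the softmax.
    T > 1 softens the distribution, lowering overconfident probabilities
    without changing the argmax prediction."""
    return softmax(logits / T)

logits = np.array([[4.0, 1.0, 0.5]])
p1 = temperature_scale(logits, 1.0)   # original, typically overconfident
p2 = temperature_scale(logits, 2.0)   # softened, better calibrated
```

Because a single T rescales every logit, accuracy is untouched; only the confidence changes.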
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
In: Uncertainty in deep learning, Dropout Uncertainty, Gaussian Processes
Symbols count in article: 7.6k Reading time ≈ 7 mins.
Explains, from a Bayesian perspective, why dropout works and how to model the uncertainty of a dropout neural network: dropout training in a deep network can be viewed as approximate inference in a deep Gaussian process.
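The practical recipe this view yields is MC dropout: keep dropout active at test time and average several stochastic forward passes. A toy sketch with fixed random weights (a real network would be trained; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 16))   # toy fixed weights for illustration
W2 = rng.normal(size=(16, 1))

def stochastic_forward(x, p=0.5):
    """One forward pass with dropout left ON at test time (the MC dropout trick)."""
    h = np.maximum(x @ W1, 0.0)
    mask = (rng.random(h.shape) > p) / (1.0 - p)   # inverted dropout scaling
    return (h * mask) @ W2

x = np.array([[0.3, -1.2]])
samples = np.stack([stochastic_forward(x) for _ in range(200)])
pred_mean = samples.mean(axis=0)   # approximate predictive mean
pred_var = samples.var(axis=0)     # spread across passes ~ model uncertainty
```

Each pass samples a different thinned network, so the spread of the outputs approximates the predictive variance of the corresponding Gaussian process posterior.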
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
In: Uncertainty in deep learning, Dropout Uncertainty, Computer Vision
Symbols count in article: 7.9k Reading time ≈ 7 mins.
Uses Monte Carlo dropout to model epistemic uncertainty, and performs MAP inference through the loss function to model aleatoric uncertainty.
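The aleatoric half comes from a heteroscedastic regression loss: the network predicts both a mean and a per-sample log-variance. A sketch (function and variable names are my own, not the paper's code):

```python
import numpy as np

def heteroscedastic_loss(y, mu, log_var):
    """MAP-style loss: the network outputs a mean mu and a log-variance
    log_var per sample. The exp(-log_var) precision term down-weights noisy
    points (aleatoric uncertainty); the 0.5*log_var term stops the network
    from simply predicting huge variance everywhere."""
    precision = np.exp(-log_var)
    return float(np.mean(0.5 * precision * (y - mu) ** 2 + 0.5 * log_var))

y  = np.array([1.0, 2.0])
mu = np.array([1.1, 5.0])   # second prediction is badly off
loss_confident = heteroscedastic_loss(y, mu, np.array([0.0, 0.0]))
loss_uncertain = heteroscedastic_loss(y, mu, np.array([0.0, 2.0]))
```

Declaring high variance on the hard point lowers the loss, which is exactly how the model learns to flag noisy inputs.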
Aleatory or epistemic? Does it matter?
In: Uncertainty in deep learning
Symbols count in article: 11k Reading time ≈ 10 mins.
Starting from engineering problems and other applications that demand high model accuracy, the article analyzes the two kinds of uncertainty present in the basic variables, models, and parameters used in modeling: aleatory and epistemic uncertainty.
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
In: Visual Reasoning
Symbols count in article: 12k Reading time ≈ 11 mins.
A combination of symbolism and connectionism. The symbolic part is the program executor: a DSL is defined and explicit nested programs are learned; at execution time each step runs the corresponding DSL-defined operation, with no parameters to learn. The connectionist part is the R-CNN for visual perception and the autoencoder model for natural-language processing.
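To make the parameter-free executor concrete, here is a toy, hypothetical sketch (the real NS-CL executes its DSL over soft concept scores from the perception module, not hard attribute labels):

```python
# Toy scene: each object is a dict of attributes produced by perception.
scene = [
    {"color": "red",  "shape": "cube"},
    {"color": "blue", "shape": "cube"},
    {"color": "red",  "shape": "sphere"},
]

# Parameter-free DSL operations: plain code, nothing is learned here.
def filter_objs(objs, attr, value):
    return [o for o in objs if o[attr] == value]

def count(objs):
    return len(objs)

# Nested program for "How many red cubes are there?":
program = count(filter_objs(filter_objs(scene, "color", "red"), "shape", "cube"))
# program == 1
```

The learned components only decide *which* program to run and *what* the perception outputs are; execution itself is fixed symbolic code.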
Neural Module Networks
In: Visual Reasoning, VQA, Neural Module Networks
Symbols count in article: 12k Reading time ≈ 11 mins.
Neural Module Networks applied to VQA.
DL2: Training and Querying Neural Networks with Logic
In: Neural Networks with Logic Rules
Symbols count in article: 9k Reading time ≈ 8 mins.
Rules containing logical comparison operators as well as conjunction, disjunction, and negation are translated, via the construction given in the paper, into loss functions that are differentiable almost everywhere and can be optimized with standard gradient methods.
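A sketch of the DL2-style translation: every constraint becomes a non-negative loss that is zero exactly when the constraint holds, and losses compose like the logic (conjunction sums, disjunction multiplies).

```python
# Each constraint maps to a non-negative loss, zero iff the constraint holds.
def loss_le(a, b):        # loss for the atom a <= b
    return max(a - b, 0.0)

def loss_and(*losses):    # conjunction: zero only if every part is satisfied
    return sum(losses)

def loss_or(*losses):     # disjunction: zero as soon as any one part holds
    prod = 1.0
    for l in losses:
        prod *= l
    return prod

# Example constraint on a network output p: (p <= 0.3) or (0.7 <= p),
# i.e. the prediction should be confidently low or confidently high.
def margin_loss(p):
    return loss_or(loss_le(p, 0.3), loss_le(0.7, p))
```

Since `max(·, 0)` is differentiable almost everywhere, the composed loss can be pushed through a standard gradient optimizer.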
Harnessing Deep Neural Networks with Logic Rules
In: Neural Networks with Logic Rules, First Order Logic Rules, Knowledge Distillation, Posterior Regularization
Symbols count in article: 13k Reading time ≈ 11 mins.
Knowledge expressed as first-order logic rules is injected into the update of the NN parameters through (posterior) regularization. A teacher network and a student network are trained jointly and iteratively, where the teacher network has a direct closed-form construction, so its probability distribution can be obtained directly at every iteration.
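A simplified sketch of that closed-form teacher construction (notation reduced to one rule-penalty vector; names are illustrative): the teacher reweights the student's distribution by how badly each label violates the rules.

```python
import numpy as np

def teacher_distribution(student_probs, rule_penalty, C=6.0):
    """Closed-form projection: q(y|x) is proportional to
    p_theta(y|x) * exp(-C * penalty(x, y)), where penalty(x, y) measures how
    much label y violates the logic rules. No inner optimization is needed:
    q is available in closed form at every training iteration."""
    q = student_probs * np.exp(-C * np.asarray(rule_penalty))
    return q / q.sum(axis=-1, keepdims=True)

p = np.array([0.5, 0.5])         # student is undecided between two labels
penalty = np.array([0.0, 1.0])   # rules say label 1 is a violation
q = teacher_distribution(p, penalty)
```

The student is then trained to imitate `q` (knowledge distillation) while also fitting the data, so rule knowledge flows into its parameters.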
A Short Introduction to Probabilistic Soft Logic (Reading Notes)
In: PSL, Neural Networks with Logic Rules, First Order Logic Rules
Symbols count in article: 7.7k Reading time ≈ 7 mins.
Probabilistic soft logic (PSL) models different types of relations between users (e.g., friendship or family ties) and can also encode multiple notions of similarity. When samples are not IID, PSL can be used to model their dependencies.
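At its core, PSL relaxes Boolean logic to [0, 1] truth values with the Łukasiewicz operators and penalizes each rule by its distance to satisfaction. A small sketch (predicate values are made-up examples):

```python
# Lukasiewicz relaxations used by PSL: truth values live in [0, 1].
def l_and(a, b):
    return max(a + b - 1.0, 0.0)

def l_or(a, b):
    return min(a + b, 1.0)

def l_not(a):
    return 1.0 - a

def distance_to_satisfaction(body, head):
    """For a rule body -> head, PSL penalizes max(body - head, 0):
    zero whenever the head is at least as true as the body."""
    return max(body - head, 0.0)

# Rule: Friends(a, b) AND Likes(a, item) -> Likes(b, item)
body = l_and(0.9, 0.8)                          # relaxed truth of the body
penalty = distance_to_satisfaction(body, 0.4)   # head is only 0.4 true
```

Because these penalties are convex piecewise-linear in the truth values, inference over all ground rules is an efficient convex optimization rather than discrete search.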
Neural Aspect and Opinion Term Extraction with Mined Rules as Weak Supervision
In: Distant Supervision, Neural Networks with Logic Rules, Sentiment Analysis
Symbols count in article: 11k Reading time ≈ 10 mins.
Proposes a neural model based on BiLSTM-CRF (bidirectional LSTM with a CRF layer) for aspect and opinion term extraction. Training uses human-annotated data as ground-truth supervision together with data annotated by mined rules as weak supervision, similar to semi-supervised learning; but because the method injects learned domain knowledge when exploiting unlabeled data, it is also related to distant supervision with an external KB.
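One illustrative way (not necessarily the paper's exact recipe) to combine the two supervision sources is to down-weight the rule-annotated examples in the training objective, since mined rules are noisier than human annotation. All names below are hypothetical:

```python
import numpy as np

def weighted_nll(probs, labels, is_gold, weak_weight=0.5):
    """Negative log-likelihood over a mixed batch: gold (human-annotated)
    examples get weight 1.0, weakly (rule-annotated) examples get a smaller
    weight, reflecting the lower reliability of mined-rule labels."""
    nll = -np.log(probs[np.arange(len(labels)), labels])
    weights = np.where(is_gold, 1.0, weak_weight)
    return float((weights * nll).sum() / weights.sum())

# One gold example and one rule-annotated example, both labeled class 0.
probs = np.array([[0.9, 0.1], [0.6, 0.4]])
labels = np.array([0, 0])
loss = weighted_nll(probs, labels, is_gold=np.array([True, False]))
```

In the full model this scalar loss would sit on top of the BiLSTM-CRF's per-sequence likelihood rather than per-token softmax probabilities.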