Background & Method
few-shot learning (not directly relevant to the OOD task)
prototype clustering
Apply it to the Stratified Transformer
the DMLNet method
train
crit = nn.NLLLoss(ignore_index=-1)
CE_loss = self.crit(pred, feed_dict['seg_label'])
batch_data -> feed_dict
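Note that nn.NLLLoss expects log-probabilities, not raw logits. A minimal sketch of the intended call, assuming hypothetical tensor shapes ([batch, num_class, H, W] predictions, [batch, H, W] labels with -1 as the ignore label):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

crit = nn.NLLLoss(ignore_index=-1)  # points/pixels labeled -1 are excluded from the loss

# Hypothetical shapes: pred is raw decoder output, seg_label holds class ids or -1.
pred = torch.randn(6, 13, 32, 32)
seg_label = torch.randint(-1, 13, (6, 32, 32))

# NLLLoss consumes log-probabilities, so apply log_softmax over the class dim first;
# feeding raw logits here would produce a wrong loss value.
CE_loss = crit(F.log_softmax(pred, dim=1), seg_label)
```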
train.py->main
net_encoder = ModelBuilder.build_encoder(
    arch=cfg.MODEL.arch_encoder.lower(),
    fc_dim=cfg.MODEL.fc_dim,
    weights=cfg.MODEL.weights_encoder)
net_decoder = ModelBuilder.build_decoder(
    arch=cfg.MODEL.arch_decoder.lower(),
    fc_dim=cfg.MODEL.fc_dim,
    num_class=cfg.DATASET.num_class,
    weights=cfg.MODEL.weights_decoder)
Where is the prototype?
- No prototype appears in the loss
- The prototype should sit in the classifier (the probability procedure)
- self.centers should be the prototypes
- Why is the prototype magnitude 3?
magnitude = 3
features_shape = features.size()  # batch * hw * c = torch.Size([6, 8875, 13])
features = features.unsqueeze(2).expand(
    features_shape[0], features_shape[1], num_classes, features_shape[2])  # batch * hw * num_classes * c
expand() does not actually copy data; it returns a broadcast view repeated num_classes times, which turns [6, 8875, 13] (after unsqueeze, [6, 8875, 1, 13]) into [6, 8875, 13, 13]
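Putting these pieces together, a minimal sketch of how self.centers could serve as per-class prototypes in the probability procedure; the exact DMLNet wiring is an assumption, and the role of magnitude as a distance scale is inferred from the notes above:

```python
import torch

batch, hw, num_classes, c = 6, 8875, 13, 13
magnitude = 3  # scaling factor from the notes; why 3 is still an open question

features = torch.randn(batch, hw, c)
centers = torch.randn(num_classes, c)  # self.centers: one learnable prototype per class

# Broadcast features against the prototypes and take squared Euclidean distances.
feats = features.unsqueeze(2).expand(batch, hw, num_classes, c)  # [6, 8875, 13, 13]
dists = (feats - centers).pow(2).sum(-1)                         # [6, 8875, 13]

# Distance-based logits: closer to a prototype => higher class score.
logits = -magnitude * dists
probs = logits.softmax(dim=-1)
```

Under this reading, magnitude = 3 would just be a temperature-like hyperparameter that sharpens the softmax over distances.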
Design for ST (Stratified Transformer)
feats.size() = torch.Size([Batch, 48])
Should loss_dce use CrossEntropyLoss or NLLLoss?
Should loss_vl be computed over the whole scene or the whole batch? How do we carry the per-image concept over to point clouds? (See the per-scene sketch under proto_loss.py below.)
TODO
Clarify the CrossEntropyLoss, NLLLoss, and DCE formulas
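A quick disambiguation sketch: CrossEntropyLoss is exactly NLLLoss composed with log_softmax, and DCE (distance-based cross-entropy) is assumed here to be the same cross-entropy applied to negative prototype distances, following the prototype reading above:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 13)          # raw logits
y = torch.randint(0, 13, (4,))  # targets

# CrossEntropyLoss == NLLLoss applied to log_softmax of the logits.
ce = F.cross_entropy(x, y)
nll = F.nll_loss(F.log_softmax(x, dim=-1), y)
assert torch.allclose(ce, nll)

# DCE (as sketched here): replace the logits with negative prototype distances,
# then apply the same cross-entropy.
dists = torch.rand(4, 13)         # squared distances to the 13 prototypes
dce = F.cross_entropy(-dists, y)  # == F.nll_loss(F.log_softmax(-dists, -1), y)
```

So loss_dce can be implemented with either module, as long as NLLLoss is fed log-probabilities.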
proto_loss.py
Extract each scene from feat & target to compute the loss, as sketched below
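Point-cloud batches in ST/PT-style code are typically concatenated along the point dimension with an offset tensor marking scene boundaries, which is the closest analogue of the per-image concept. A hedged sketch of per-scene loss computation (the offset convention and the per_scene_losses helper are assumptions, not DMLNet code):

```python
import torch

def per_scene_losses(feat, target, offset, loss_fn):
    """Split a concatenated batch into scenes and compute the loss per scene.

    feat:   [N, C] point features for the whole batch
    target: [N] point labels
    offset: cumulative point counts, e.g. tensor([n1, n1+n2, ...])
    """
    losses = []
    start = 0
    for end in offset.tolist():
        losses.append(loss_fn(feat[start:end], target[start:end]))
        start = end
    return torch.stack(losses).mean()
```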
Compute the AUPR / AUROC metrics in two ways, traditional vs. novel; how large is the difference (how many percentage points)?
Does it matter whether the last layer outputs logits or probabilities?
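For reference, both metrics can be computed from per-point anomaly scores with scikit-learn; which score is used (raw logits vs. distance-based probabilities) is exactly the variable in question. A minimal sketch with hypothetical random data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical per-point data: 1 = OOD point, 0 = in-distribution point.
ood_labels = np.random.randint(0, 2, size=10000)
# Anomaly score, e.g. min distance to any prototype, or 1 - max softmax probability.
anomaly_scores = np.random.rand(10000)

auroc = roc_auc_score(ood_labels, anomaly_scores)
aupr = average_precision_score(ood_labels, anomaly_scores)
```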
Hyperparameters
Check how mIoU is computed: from the raw features or from the distances to prototypes (see the sketch below)
All scenes vs. only scenes that contain OOD objects
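Either way, mIoU only needs the hard predictions, so the two variants differ only in which tensor feeds the argmax; a small sketch with hypothetical shapes:

```python
import torch

logits = torch.randn(8875, 13)  # raw classifier features/logits for one scene
dists = torch.rand(8875, 13)    # distances to the 13 prototypes

pred_from_logits = logits.argmax(dim=-1)  # mIoU from the raw features
pred_from_dists = dists.argmin(dim=-1)    # mIoU from prototype distances
```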
ST's point3D conflicts with geometry
conda activate pointcept && …
Modify models/default.py/xxx (module) and backbone()
First, tune hyperparameters to raise the baseline PT model's segmentation accuracy to 69.8
Then reproduce DMLNet on top of PT
Then run the remaining experiments
Bug
anomaly->models->models.py->SegmentationModule->forward
- CE_loss = self.crit(pred, feed_dict['seg_label']): pred should be a tensor instead of a tuple. Solution: use pred[0] (see the sketch below).
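A defensive version of the fix, with _unwrap_pred as a hypothetical helper name:

```python
def _unwrap_pred(pred):
    # Some decoder variants return a (pred, ...) tuple, so unwrap to the
    # tensor before the loss call (defensive version of the pred[0] fix).
    return pred[0] if isinstance(pred, tuple) else pred

# e.g. inside SegmentationModule.forward:
#   CE_loss = self.crit(_unwrap_pred(pred), feed_dict['seg_label'])
```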