YOLOv8 Improvement | Fusion Improvement | Fusing C2f with a Lightweight Vision Transformer [Full Code] - CSDN Blog

Autumn-recruitment interview column recommendation: Summary of interview questions for deep learning algorithm engineers [Baimian Algorithm Engineer] — click to jump

💡💡💡 All programs in this column have been tested and run successfully 💡💡💡

Column directory:

This article introduces CloFormer, a lightweight vision transformer that improves performance through context-aware local enhancement. It combines globally shared weights with context-specific weights and proposes the AttnConv convolution operator to aggregate and enhance local features. Experiments show that CloFormer performs well on a variety of vision tasks. After explaining the main principle, the article walks you step by step through adding and modifying the module code, and the complete modified code is provided at the end so you can run it in one click; even beginners can easily put it into practice. The goal is to help you better tackle the challenges of the deep learning object detection YOLO series.

Column address: YOLOv8 Improvements — various effective accuracy-boosting methods, continuously updated — click to jump, subscribe, and learn without losing track

Contents

1. Principle

2. Adding the C2f_CloAtt code to YOLOv8

2.1 C2f_CloAtt code implementation

2.2 Modifying the __init__.py file

2.3 Adding the yaml file

2.4 Registering the module

2.5 Running the program

3. Full code

4. GFLOPs

5. Going further

6. Summary

1. Principle


Paper: Rethinking Local Perception in Lightweight Vision Transformer — click to jump

Official code: official code repository — click to jump

CloFormer is a lightweight vision transformer designed to balance efficiency and performance, especially for mobile-friendly applications. It introduces a new module called AttnConv, which strengthens local perception by integrating the advantages of shared weights and context-aware weights, which are typically used in convolutional neural networks (CNNs) and transformers, respectively.

Key principles of CloFormer:

Context-aware local enhancement (AttnConv):

  • AttnConv is the core innovation of CloFormer, combining elements of traditional convolution and attention. It uses depthwise convolution (DWConv) with globally shared weights to gather local information, as CNNs do, followed by a gating mechanism that generates context-aware weights to enhance these local features. Compared with conventional self-attention, this mechanism introduces stronger non-linearity, allowing the model to better capture high-frequency details in images (a minimal code sketch follows at the end of this section).

Dual-branch architecture

  • CloFormer adopts a two-branch structure. One branch uses AttnConv to capture high-frequency local information, while the other uses a modified version of vanilla attention to capture low-frequency global information. The global branch reduces computational load by downsampling the key and value tokens before processing them, making the model more efficient.

Fusion of local and global information

  • The outputs of the local and global branches are concatenated and then processed by a fully connected layer. This design lets CloFormer capture and exploit both high-frequency (detailed) and low-frequency (global) information in an image, which is essential for tasks such as image classification, object detection, and semantic segmentation.

Efficiency and performance

  • CloFormer is designed to run effectively with limited computational resources, making it suitable for deployment on mobile devices. Compared with other lightweight models, it achieves competitive accuracy with fewer parameters and lower FLOPs (floating-point operations).

Applications:

  • CloFormer has been evaluated on a variety of vision tasks, such as image classification, object detection, and semantic segmentation, and performs well on each of them.

In short, CloFormer innovatively combines the strengths of convolutional networks and transformers, particularly in handling local and global information, making it a powerful yet efficient model for mobile and edge applications.
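To make the AttnConv idea concrete, here is a minimal, self-contained sketch: a shared-weight depthwise convolution gathers local context, then a gated, context-aware weight re-weights the values. This is only an illustration under the description above, not the official CloFormer implementation; the class and parameter names are invented for clarity.

import torch
import torch.nn as nn

class AttnConvSketch(nn.Module):
    """Illustrative AttnConv-style block: DWConv local aggregation + gated, context-aware re-weighting."""
    def __init__(self, dim, kernel_size=5):
        super().__init__()
        self.qkv = nn.Conv2d(dim, dim * 3, 1)                                  # pointwise projection to Q, K, V
        self.dwconv = nn.Conv2d(dim * 3, dim * 3, kernel_size,
                                padding=kernel_size // 2, groups=dim * 3)      # shared-weight local mixing
        self.gate = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.SiLU(),
                                  nn.Conv2d(dim, dim, 1))                      # generates context-aware weights
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        q, k, v = self.dwconv(self.qkv(x)).chunk(3, dim=1)
        attn = torch.tanh(self.gate(q * k))                                    # stronger non-linearity than plain softmax attention
        return self.proj(attn * v)                                             # re-weight the locally aggregated values

For example, AttnConvSketch(64)(torch.randn(1, 64, 32, 32)) returns a tensor of the same shape as its input, since the block only re-weights features channel- and position-wise.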

2. Adding the C2f_CloAtt code to YOLOv8

2.1 C2f_CloAtt code implementation

Key step 1: paste the code below into /ultralytics/ultralytics/nn/modules/block.py, and add "C2f_CloAtt" to that file's __all__.

import torch            # torch and nn are already imported at the top of block.py; keep these only for standalone testing
import torch.nn as nn
from efficientnet_pytorch.model import MemoryEfficientSwish  # requires: pip install efficientnet_pytorch


class AttnMap(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.act_block = nn.Sequential(
            nn.Conv2d(dim, dim, 1, 1, 0),
            MemoryEfficientSwish(),
            nn.Conv2d(dim, dim, 1, 1, 0)
        )

    def forward(self, x):
        return self.act_block(x)


class EfficientAttention(nn.Module):
    def __init__(self, dim, num_heads=8, group_split=[4, 4], kernel_sizes=[5], window_size=4,
                 attn_drop=0., proj_drop=0., qkv_bias=True):
        super().__init__()
        assert sum(group_split) == num_heads
        assert len(kernel_sizes) + 1 == len(group_split)
        self.dim = dim
        self.num_heads = num_heads
        self.dim_head = dim // num_heads
        self.scalor = self.dim_head ** -0.5
        self.kernel_sizes = kernel_sizes
        self.window_size = window_size
        self.group_split = group_split
        convs = []
        act_blocks = []
        qkvs = []
        for i in range(len(kernel_sizes)):
            kernel_size = kernel_sizes[i]
            group_head = group_split[i]
            if group_head == 0:
                continue
            convs.append(nn.Conv2d(3 * self.dim_head * group_head, 3 * self.dim_head * group_head,
                                   kernel_size, 1, kernel_size // 2, groups=3 * self.dim_head * group_head))
            act_blocks.append(AttnMap(self.dim_head * group_head))
            qkvs.append(nn.Conv2d(dim, 3 * group_head * self.dim_head, 1, 1, 0, bias=qkv_bias))
        if group_split[-1] != 0:
            self.global_q = nn.Conv2d(dim, group_split[-1] * self.dim_head, 1, 1, 0, bias=qkv_bias)
            self.global_kv = nn.Conv2d(dim, group_split[-1] * self.dim_head * 2, 1, 1, 0, bias=qkv_bias)
            self.avgpool = nn.AvgPool2d(window_size, window_size) if window_size != 1 else nn.Identity()
        self.convs = nn.ModuleList(convs)
        self.act_blocks = nn.ModuleList(act_blocks)
        self.qkvs = nn.ModuleList(qkvs)
        self.proj = nn.Conv2d(dim, dim, 1, 1, 0, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj_drop = nn.Dropout(proj_drop)

    def high_fre_attention(self, x: torch.Tensor, to_qkv: nn.Module, mixer: nn.Module, attn_block: nn.Module):
        '''High-frequency (local) branch. x: (b c h w)'''
        b, c, h, w = x.size()
        qkv = to_qkv(x)  # (b (3 m d) h w)
        qkv = mixer(qkv).reshape(b, 3, -1, h, w).transpose(0, 1).contiguous()  # (3 b (m d) h w)
        q, k, v = qkv  # (b (m d) h w)
        attn = attn_block(q.mul(k)).mul(self.scalor)
        attn = self.attn_drop(torch.tanh(attn))
        res = attn.mul(v)  # (b (m d) h w)
        return res

    def low_fre_attention(self, x: torch.Tensor, to_q: nn.Module, to_kv: nn.Module, avgpool: nn.Module):
        '''Low-frequency (global) branch with downsampled keys/values. x: (b c h w)'''
        b, c, h, w = x.size()
        q = to_q(x).reshape(b, -1, self.dim_head, h * w).transpose(-1, -2).contiguous()  # (b m (h w) d)
        kv = avgpool(x)  # (b c H W)
        kv = to_kv(kv).view(b, 2, -1, self.dim_head, (h * w) // (self.window_size ** 2)).permute(1, 0, 2, 4, 3).contiguous()  # (2 b m (H W) d)
        k, v = kv  # (b m (H W) d)
        attn = self.scalor * q @ k.transpose(-1, -2)  # (b m (h w) (H W))
        attn = self.attn_drop(attn.softmax(dim=-1))
        res = attn @ v  # (b m (h w) d)
        res = res.transpose(2, 3).reshape(b, -1, h, w).contiguous()
        return res

    def forward(self, x: torch.Tensor):
        '''x: (b c h w)'''
        res = []
        for i in range(len(self.kernel_sizes)):
            if self.group_split[i] == 0:
                continue
            res.append(self.high_fre_attention(x, self.qkvs[i], self.convs[i], self.act_blocks[i]))
        if self.group_split[-1] != 0:
            res.append(self.low_fre_attention(x, self.global_q, self.global_kv, self.avgpool))
        return self.proj_drop(self.proj(torch.cat(res, dim=1)))


class Bottleneck_CloAtt(Bottleneck):
    """Standard bottleneck with CloAttention."""

    def __init__(self, c1, c2, shortcut=True, g=1, k=(3, 3), e=0.5):
        super().__init__(c1, c2, shortcut, g, k, e)
        self.attention = EfficientAttention(c2)

    def forward(self, x):
        """Apply the two convolutions followed by CloFormer attention, with an optional residual connection."""
        return x + self.attention(self.cv2(self.cv1(x))) if self.add else self.attention(self.cv2(self.cv1(x)))


class C2f_CloAtt(C2f):
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(Bottleneck_CloAtt(self.c, self.c, shortcut, g, k=(3, 3), e=1.0) for _ in range(n))
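A quick way to confirm the pasted block works before wiring it into a model (this assumes torch and efficientnet_pytorch are installed and that Bottleneck and C2f are already defined in block.py, as in a standard ultralytics install):

x = torch.randn(1, 64, 32, 32)              # dummy feature map; spatial size divisible by window_size=4
m = C2f_CloAtt(64, 64, n=1, shortcut=True)
print(m(x).shape)                            # expected: torch.Size([1, 64, 32, 32])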

CloFormer's image-processing pipeline can be summarized in the following key steps, each corresponding to a different module and operation in the model:

1. Input processing and initial feature extraction

  • Image input: the input image is first split into non-overlapping patches, similar to the classic Vision Transformer (ViT).

  • Linear embedding: each patch is embedded by a linear layer that maps it into a high-dimensional space, forming the initial feature representation.

2. Local and global information branches

CloFormer has two main branches, handling local high-frequency information and global low-frequency information respectively:

  • Local branch (AttnConv)

    • Local perception (DWConv): in this branch, depthwise convolution (Depthwise Convolution, DWConv) extracts local information. By sharing weights, DWConv captures local features within the patches effectively without increasing the computational cost.

    • Context-aware weights: a gating mechanism then generates input-specific weights from the context and adjusts the extracted local features. By introducing non-linearity, this step lets the model enhance local details more effectively.

  • Global branch (modified global attention)

    • Downsampling: before processing, the global branch downsamples the keys and values (Key and Value) to reduce computational complexity.

    • Attention mechanism: a simplified global attention is then applied to compute global feature representations at the lower resolution. This helps the model capture global low-frequency information such as large-scale structures and shapes (a minimal sketch follows the summary below).

3. Feature fusion

  • Channel concatenation: features from the local and global branches are concatenated along the channel dimension, fusing the two kinds of frequency information.

  • Fully connected layer: the concatenated features are processed by a fully connected layer to produce a more comprehensive feature representation.

4. Output prediction

  • Classification head or other task heads: the final fused features are fed into a task-specific head (e.g., a classification or detection head) to produce the final predictions. Depending on the task, this can be image classification, object detection, semantic segmentation, and so on.

Summary

Through its distinctive dual-branch architecture, CloFormer first embeds the input patches and extracts features, then processes local and global information through the local perception branch (AttnConv) and the global perception branch respectively, and finally fuses the two for further processing to produce the task prediction. This design lets CloFormer capture both fine details and overall structure while remaining computationally efficient.
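As a companion to the AttnConv sketch earlier, here is a minimal sketch of the downsampled global-attention branch described in step 2. It is an illustration only, with invented names, not the official code: keys and values are average-pooled before attention, so the quadratic attention cost is paid at the reduced resolution.

import torch
import torch.nn as nn

class GlobalBranchSketch(nn.Module):
    """Illustrative low-frequency branch: attention over average-pooled keys/values."""
    def __init__(self, dim, num_heads=4, window_size=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.to_q = nn.Conv2d(dim, dim, 1)
        self.to_kv = nn.Conv2d(dim, dim * 2, 1)
        self.pool = nn.AvgPool2d(window_size, window_size)   # downsample before computing K and V

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.to_q(x).reshape(b, self.num_heads, self.head_dim, h * w).transpose(-1, -2)  # (b, m, hw, d)
        kv = self.to_kv(self.pool(x))                                                         # reduced resolution
        k, v = kv.reshape(b, 2, self.num_heads, self.head_dim, -1).permute(1, 0, 2, 4, 3)     # (b, m, HW, d) each
        attn = (self.scale * q @ k.transpose(-1, -2)).softmax(dim=-1)                         # (b, m, hw, HW)
        out = (attn @ v).transpose(-1, -2).reshape(b, c, h, w)
        return out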


2.2 Modifying the __init__.py file

Key step 2: modify the __init__.py file under the modules folder; first import the module.


Then declare it in the __all__ below (a sketch of both edits follows).

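Since the original screenshots are not reproduced here, the two edits to ultralytics/nn/modules/__init__.py look roughly like the following. The exact contents of the import list and __all__ vary between ultralytics versions; only the C2f_CloAtt entries are additions.

# in ultralytics/nn/modules/__init__.py
from .block import (C1, C2, C2f, C2f_CloAtt, C3, C3TR, C3Ghost, SPP, SPPF,
                    Bottleneck, BottleneckCSP)   # ... keep the other existing blocks your version imports

__all__ = (
    'C1', 'C2', 'C2f', 'C2f_CloAtt', 'C3', 'C3TR', 'C3Ghost', 'SPP', 'SPPF',
    'Bottleneck', 'BottleneckCSP',               # ... plus the other existing names
)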

2.3 Adding the yaml file

Key step 3: create a new file named yolov8_C2f_CloAtt.yaml under /ultralytics/ultralytics/cfg/models/v8 and paste in the content below.

  • OD [Object Detection]
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f_CloAtt, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f_CloAtt, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 21 (P5/32-large)
  - [[15, 18, 21], 1, Detect, [nc]] # Detect(P3, P4, P5)
  • Seg [Semantic Segmentation]
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f_CloAtt, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f_CloAtt, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 21 (P5/32-large)
  - [[15, 18, 21], 1, Segment, [nc, 32, 256]] # Segment(P3, P4, P5)
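After saving either yaml, you can check that it parses correctly before launching a full training run (the path below assumes the detection yaml created above):

from ultralytics import YOLO

model = YOLO("ultralytics/cfg/models/v8/yolov8_C2f_CloAtt.yaml")  # build the model from the new yaml
model.info()  # prints layer count, parameter count and GFLOPs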

Friendly reminder: this article only adds the module on top of the base yolov8 configuration. To apply it to yolov8n/s/m/l/x, simply specify the corresponding depth_multiple and width_multiple (listed below; a worked example follows the snippet). If this is unclear, see this article: yolov8 yaml file explained — click to jump.

# YOLOv8n
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.25 # layer channel multiple
max_channels: 1024 # max_channels

# YOLOv8s
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
max_channels: 1024 # max_channels

# YOLOv8l
depth_multiple: 1.00 # model depth multiple
width_multiple: 1.00 # layer channel multiple
max_channels: 512 # max_channels

# YOLOv8m
depth_multiple: 0.67 # model depth multiple
width_multiple: 0.75 # layer channel multiple
max_channels: 768 # max_channels

# YOLOv8x
depth_multiple: 1.33 # model depth multiple
width_multiple: 1.25 # layer channel multiple
max_channels: 512 # max_channels
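To make the scaling concrete, the small calculation below shows how the n-scale multipliers turn the yaml entry "- [-1, 6, C2f_CloAtt, [512, True]]" into the 2-repeat, 128-channel layer that appears in the printout in section 2.5. The make_divisible here is a simplified stand-in for the ultralytics helper, for illustration only.

import math

def make_divisible(x, divisor=8):
    # simplified stand-in for ultralytics' channel-rounding helper
    return int(math.ceil(x / divisor) * divisor)

depth_multiple, width_multiple, max_channels = 0.33, 0.25, 1024     # YOLOv8n scale

repeats = max(round(6 * depth_multiple), 1)                          # 6 repeats -> 2
channels = make_divisible(min(512, max_channels) * width_multiple)   # 512 channels -> 128
print(repeats, channels)                                             # 2 128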

2.4 Registering the module

Key step 4: in the parse_model function of tasks.py, register C2f_CloAtt as shown below.

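With the original screenshots omitted, the change amounts to adding C2f_CloAtt to the module tuples that parse_model already uses for C2f. The exact tuple contents differ between ultralytics versions; the snippet below only illustrates where the name is inserted, and assumes C2f_CloAtt is also imported at the top of tasks.py alongside the other modules.

# in ultralytics/nn/tasks.py, inside parse_model()
if m in (Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF,
         DWConv, Focus, BottleneckCSP, C1, C2, C2f, C2f_CloAtt, C3, C3TR, C3Ghost, C3x, RepC3):
    c1, c2 = ch[f], args[0]
    if c2 != nc:  # do not rescale the number of classes
        c2 = make_divisible(min(c2, max_channels) * width, 8)
    args = [c1, c2, *args[1:]]
    if m in (BottleneckCSP, C1, C2, C2f, C2f_CloAtt, C3, C3TR, C3Ghost, C3x, RepC3):
        args.insert(2, n)  # number of repeats
        n = 1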

2.5 Running the program

In train.py, set the model parameter path to the path of yolov8_C2f_CloAtt.yaml.

It is recommended to use an absolute path to make sure the file can definitely be found.

from ultralytics import YOLO
import warnings
warnings.filterwarnings('ignore')
from pathlib import Path

if __name__ == '__main__':
    # Load the model
    model = YOLO("ultralytics/cfg/v8/yolov8.yaml")  # path to the model yaml file you want to use
    # Use the model
    results = model.train(data=r"path to your dataset yaml file",
                          epochs=100,
                          batch=16,
                          imgsz=640,
                          workers=4,
                          name=Path(model.cfg).stem)  # train the model

🚀 Run the program. If the following output appears, the module has been added successfully 🚀

                  from  n    params  module                                        arguments
  0                 -1  1       464  ultralytics.nn.modules.conv.Conv              [3, 16, 3, 2]
  1                 -1  1      4672  ultralytics.nn.modules.conv.Conv              [16, 32, 3, 2]
  2                 -1  1      7360  ultralytics.nn.modules.block.C2f              [32, 32, 1, True]
  3                 -1  1     18560  ultralytics.nn.modules.conv.Conv              [32, 64, 3, 2]
  4                 -1  2     49664  ultralytics.nn.modules.block.C2f              [64, 64, 2, True]
  5                 -1  1     73984  ultralytics.nn.modules.conv.Conv              [64, 128, 3, 2]
  6                 -1  2    240128  ultralytics.nn.modules.block.C2f_CloAtt       [128, 128, 2, True]
  7                 -1  1    295424  ultralytics.nn.modules.conv.Conv              [128, 256, 3, 2]
  8                 -1  1    539648  ultralytics.nn.modules.block.C2f_CloAtt       [256, 256, 1, True]
  9                 -1  1    164608  ultralytics.nn.modules.block.SPPF             [256, 256, 5]
 10                 -1  1         0  torch.nn.modules.upsampling.Upsample          [None, 2, 'nearest']
 11            [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat            [1]
 12                 -1  1    148224  ultralytics.nn.modules.block.C2f              [384, 128, 1]
 13                 -1  1         0  torch.nn.modules.upsampling.Upsample          [None, 2, 'nearest']
 14            [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat            [1]
 15                 -1  1     37248  ultralytics.nn.modules.block.C2f              [192, 64, 1]
 16                 -1  1     36992  ultralytics.nn.modules.conv.Conv              [64, 64, 3, 2]
 17           [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat            [1]
 18                 -1  1    123648  ultralytics.nn.modules.block.C2f              [192, 128, 1]
 19                 -1  1    147712  ultralytics.nn.modules.conv.Conv              [128, 128, 3, 2]
 20            [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat            [1]
 21                 -1  1    493056  ultralytics.nn.modules.block.C2f              [384, 256, 1]
 22       [15, 18, 21]  1    897664  ultralytics.nn.modules.head.Detect            [80, [64, 128, 256]]
YOLOv8_C2f_CloAtt summary: 276 layers, 3279056 parameters, 3279040 gradients, 9.0 GFLOPs

3. Full code

https://pan.baidu.com/s/1Qy2va41OxbmHA-25XUi7oA?pwd=571i

Extraction code: 571i

4. GFLOPs

For how GFLOPs are computed, see Baimian Algorithm Engineer | Convolution Basics — Convolution.
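As a quick reference, a common approximation for the FLOPs of a single convolution layer (ignoring bias) is 2 · Cin · K² · Cout · Hout · Wout. The helper below is a hypothetical illustration of that formula, not the profiler ultralytics uses internally.

def conv_flops(c_in, c_out, k, h_out, w_out):
    """Approximate FLOPs of one conv layer: one multiply and one add per weight per output element."""
    return 2 * c_in * k * k * c_out * h_out * w_out

# e.g. the first YOLOv8 layer Conv [3, 16, 3, 2] on a 640x640 input produces a 320x320 map:
print(conv_flops(3, 16, 3, 320, 320) / 1e9, "GFLOPs")  # ≈ 0.088 GFLOPs for this single layer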

GFLOPs of the unmodified YOLOv8n: 8.9 GFLOPs (see the n-scale comment in the yaml above).


GFLOPs after the improvement

I don't have a GPU at hand right now; I will fill this in once I do. If you need the number, please measure it yourself (the model printout in section 2.5 reports 9.0 GFLOPs for the n scale).

5. Going further

This module can be combined with other attention mechanisms, loss functions, and so on to further improve detection performance.

6. Summary

CloFormer is a lightweight vision Transformer designed for mobile devices that effectively balances performance and computational efficiency by introducing the AttnConv module. The core innovation is AttnConv, which combines the advantages of convolutional neural networks (CNNs) and self-attention. In AttnConv, depthwise convolution (DWConv) first aggregates local information using globally shared weights, and a gating mechanism then generates context-aware weights to enhance these local features. This design introduces stronger non-linearity than conventional local self-attention, allowing the model to better capture high-frequency detail in images.

CloFormer adopts a dual-branch architecture: one branch processes high-frequency local information through AttnConv, while the other processes low-frequency global information through a simplified standard attention. To reduce computation, the global branch downsamples the keys and values (Key and Value) before computing attention. The outputs of the two branches are concatenated along the channel dimension and processed by a fully connected layer, so the model can capture and exploit both high-frequency and low-frequency information in an image.

CloFormer is designed to perform well under limited computational resources and is particularly suited to mobile devices. It demonstrates excellent performance on vision tasks such as image classification, object detection, and semantic segmentation, with fewer parameters and lower FLOPs (floating-point operations) than other lightweight models, underscoring both its efficiency and its effectiveness.
