Function Repository Resource:
ImportSafeTensors
Import a safetensors binary file
Contributed by:
Nikolay Murzin
ResourceFunction["ImportSafeTensors"][file] return association with tensors from a safetensors binary file. | |
ResourceFunction["ImportSafeTensors"][file,name] return a specific tensor with a given name. | |
ResourceFunction["ImportSafeTensors"][file,{name1,name2,…}] return multiple files. | |
ResourceFunction["ImportSafeTensors"][file,"Header"] return a header containing each tensor's type, dimensions and file offset. |
Details and Options
Safetensors is a format developed by Hugging Face as a simple and safe way to store and distribute tensors.
Safetensors has become a very popular format for distributing weights of Stable Diffusion neural networks, for example on https://civitai.com/.
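A safetensors file has a simple layout: an 8-byte little-endian integer giving the length of a UTF-8 JSON header, the JSON header itself (mapping each tensor name to its type, shape and byte offsets), followed by the raw tensor data. The following is a minimal sketch of reading only that JSON header by hand, independent of this resource function (readHeader is a hypothetical helper name):
readHeader[file_String] := Module[{stream, len, bytes},
  (* first 8 bytes: little-endian length of the JSON header *)
  stream = OpenRead[file, BinaryFormat → True];
  len = BinaryRead[stream, "UnsignedInteger64", ByteOrdering → -1];
  (* next len bytes: UTF-8 JSON mapping tensor names to dtype, shape and data offsets *)
  bytes = BinaryReadList[stream, "UnsignedInteger8", len];
  Close[stream];
  ImportString[FromCharacterCode[bytes, "UTF-8"], "RawJSON"]
]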
Examples
Basic Examples (3) 
Import tensors from the wild:
| In[1]:= |
| Out[1]= |
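For instance, a hedged sketch of such an import (the checkpoint path is hypothetical; any .safetensors file downloaded from the web works the same way):
weights = ResourceFunction["ImportSafeTensors"]["~/Downloads/model.safetensors"];  (* hypothetical path *)
Dimensions /@ weights  (* dimensions of every tensor in the file *)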
Import a single tensor:
| In[2]:= |
| Out[2]= |
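A hedged sketch, using a hypothetical file path and a hypothetical tensor name taken from the file's header:
ResourceFunction["ImportSafeTensors"]["~/Downloads/model.safetensors", "model.diffusion_model.time_embed.0.weight"]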
Only import a header:
| In[3]:= |
| Out[3]= |
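Reading the header is cheap because no tensor data is loaded. A hedged sketch (file path hypothetical):
header = ResourceFunction["ImportSafeTensors"]["~/Downloads/model.safetensors", "Header"];
Take[header, 3]  (* each entry gives a tensor's type, dimensions and file offsets *)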
Neat Examples (4) 
Import DreamShaper weights:
| In[4]:= |
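A hedged sketch, assuming a DreamShaper checkpoint has already been downloaded from https://civitai.com/ (the local path is hypothetical):
dreamShaper = ResourceFunction["ImportSafeTensors"]["~/Downloads/dreamshaper.safetensors"];  (* hypothetical path *)
Length[dreamShaper]  (* number of weight tensors in the checkpoint *)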
Modify Stable Diffusion V1 parts:
| In[5]:= |
| In[6]:= |
| In[7]:= |
| In[8]:= |
| In[9]:= |
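The general pattern behind such a modification is to take a net from the Wolfram Neural Net Repository and install imported arrays into it with NetReplacePart. The following is only a hedged sketch of that pattern, not the exact steps above; the model name, part path and tensor key are assumptions:
net = NetModel["Stable Diffusion V1"];  (* assumed repository model name *)
modified = NetReplacePart[net,
  {"embedding", "Weights"} →  (* assumed weight position inside the net *)
    dreamShaper["cond_stage_model.transformer.text_model.embeddings.token_embedding.weight"]  (* assumed tensor key *)
];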
Compare vanilla and modified weights:
| In[10]:= |
| Out[10]= |
| In[11]:= |
| Out[11]= |
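One simple way to make such a comparison is the maximum absolute difference per tensor, assuming vanilla and modified are associations of numeric arrays with matching keys (both names are hypothetical):
differences = Merge[{vanilla, modified}, Max[Abs[#[[1]] - #[[2]]]] &];
TakeLargest[differences, 5]  (* the five most strongly modified tensors *)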
Add more details with LoRA:
(* Flatten convolution-style LoRA factors of shape {out, in, 1, 1} down to matrices *)
toMatrix[tensor_] := If[ArrayDepth[tensor] > 2, ReshapeLayer[Dimensions[tensor][[;; 2]]][tensor], tensor]

(* Standard LoRA update: weight + alpha/rank up.down, with the rank read off the up factor *)
LoRA[weight_, {alpha_, down_, up_}] := With[{scale = Dimensions[up][[2]]},
  FunctionLayer[#weight + #alpha / scale #up . #down &][
    <|"weight" → weight, "alpha" → alpha, "down" → toMatrix[down], "up" → toMatrix[up]|>
  ]
]

(* Text-encoder input projection: the fused q/k/v weights are updated separately and re-catenated *)
computeLoRA[net_, name : {"transformer", id_, "self-attention", "input_project", "Net", "Weights"}, tensors_] :=
  With[{array = ReshapeLayer[{3, Automatic, 768}] @ NetExtract[net, name]},
    CatenateLayer[] @ MapIndexed[
      LoRA[PartLayer[#2[[1]]] @ array, #1] &,
      Table[
        tensors[StringTemplate["lora_te_text_model_encoder_layers_``_self_attn_``_proj.``"][id - 1, layer, lora]],
        {layer, {"q", "k", "v"}},
        {lora, {"alpha", "lora_down.weight", "lora_up.weight"}}
      ]
    ]
  ]

(* Text-encoder output projection *)
computeLoRA[net_, name : {"transformer", id_, "self-attention", "output_project", "Net", "Weights"}, tensors_] :=
  computeLoRA[net, name,
    StringTemplate @ StringTemplate["lora_te_text_model_encoder_layers_``_self_attn_out_proj.``"][id - 1, "``"],
    tensors
  ]

(* Text-encoder MLP layers *)
computeLoRA[net_, name : {"transformer", id_, "mlp", linear : "linear1" | "linear2", "Net", "Weights"}, tensors_] :=
  computeLoRA[net, name,
    StringTemplate @ StringTemplate["lora_te_text_model_encoder_layers_``_mlp_``.``"][
      id - 1, StringReplace[linear, "linear" → "fc"], "``"
    ],
    tensors
  ]

(* UNet attention blocks: map the net's part path to the corresponding LoRA tensor-name suffix *)
computeLoRA[net_, name : {block : "up" | "down" | "cross_mid", bid_Integer : -1, transformerId_String, rest : PatternSequence[___, "Weights"]}, tensors_] :=
  With[{
    suffix = Replace[{rest}, {
      {"proj_in", "Weights"} → "proj_in",
      {"proj_out", "Weights"} → "proj_out",
      {"transformerBlock1", "ff", "Net", linear : "proj" | "linear", "Weights"} ⧴
        "transformer_blocks_0_ff_" <> Replace[linear, {"proj" → "net_0_proj", "linear" → "net_2"}],
      {"transformerBlock1", attn : "self-attention" | "cross-attention", layer : "query" | "value" | "key" | "output", "Net", "Weights"} ⧴
        StringTemplate["transformer_blocks_0_``_to_``"][
          Replace[attn, {"self-attention" → "attn1", "cross-attention" → "attn2"}],
          Replace[layer, {"query" → "q", "value" → "v", "key" → "k", "output" → "out_0"}]
        ],
      _ ⧴ Missing[name]
    }]
  },
    If[MissingQ[suffix], Return[suffix]];
    computeLoRA[
      net, name,
      StringTemplate @ StringTemplate["lora_unet_````_attentions_``_" <> suffix <> ".``"][
        Replace[block, {"cross_mid" → "mid_block", upOrDown_ ⧴ upOrDown <> "_blocks_"}],
        If[bid > 0, bid - 1, ""],
        Interpreter["Integer"][StringTake[transformerId, -1]] - 1,
        "``"
      ],
      tensors
    ]
  ]

(* Generic case: look up the alpha/down/up factors by template and build the LoRA update *)
computeLoRA[net_, name_, template_TemplateObject, tensors_] :=
  With[{array = NetExtract[net, name]},
    LoRA[array, tensors[template[#]] & /@ {"alpha", "lora_down.weight", "lora_up.weight"}]
  ]

(* Fallback: no LoRA tensors for this part *)
computeLoRA[_, name_, _] := Missing[name]

| In[12]:= |
| In[13]:= |
| In[14]:= |
| In[15]:= |
| In[16]:= |
| Out[16]= |
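The LoRA helper above implements the standard low-rank update W + (alpha/rank) up.down, with the rank read off the second dimension of the up matrix. A hedged sketch of how computeLoRA can then be applied across a net; here net, weightPositions (a list of weight part paths inside the net) and loraTensors (the imported LoRA checkpoint) are assumed names, not the exact steps above:
updated = NetReplacePart[net,
  Normal @ DeleteMissing @ AssociationMap[
    computeLoRA[net, #, loraTensors] &,  (* parts with no matching LoRA tensors come back as Missing and are dropped *)
    weightPositions
  ]
];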