πŸ“‹ Model Description

Quantization made by Richard Erkhov.

Github

Discord

Request more models

gemma-2b-it-finetuned-mental-health-qa - GGUF

  • Model creator: https://huggingface.co/GuelGaMesh01/
  • Original model: https://huggingface.co/GuelGaMesh01/gemma-2b-it-finetuned-mental-health-qa/

| Name | Quant method | Size |
|------|--------------|------|
| gemma-2b-it-finetuned-mental-health-qa.Q2_K.gguf | Q2_K | 1.08GB |
| gemma-2b-it-finetuned-mental-health-qa.IQ3_XS.gguf | IQ3_XS | 1.16GB |
| gemma-2b-it-finetuned-mental-health-qa.IQ3_S.gguf | IQ3_S | 1.2GB |
| gemma-2b-it-finetuned-mental-health-qa.Q3_K_S.gguf | Q3_K_S | 1.2GB |
| gemma-2b-it-finetuned-mental-health-qa.IQ3_M.gguf | IQ3_M | 1.22GB |
| gemma-2b-it-finetuned-mental-health-qa.Q3_K.gguf | Q3_K | 1.29GB |
| gemma-2b-it-finetuned-mental-health-qa.Q3_K_M.gguf | Q3_K_M | 1.29GB |
| gemma-2b-it-finetuned-mental-health-qa.Q3_K_L.gguf | Q3_K_L | 1.36GB |
| gemma-2b-it-finetuned-mental-health-qa.IQ4_XS.gguf | IQ4_XS | 1.4GB |
| gemma-2b-it-finetuned-mental-health-qa.Q4_0.gguf (recommended) | Q4_0 | 1.44GB |
| gemma-2b-it-finetuned-mental-health-qa.IQ4_NL.gguf | IQ4_NL | 1.45GB |
| gemma-2b-it-finetuned-mental-health-qa.Q4_K_S.gguf | Q4_K_S | 1.45GB |
| gemma-2b-it-finetuned-mental-health-qa.Q4_K.gguf | Q4_K | 1.52GB |
| gemma-2b-it-finetuned-mental-health-qa.Q4_K_M.gguf | Q4_K_M | 1.52GB |
| gemma-2b-it-finetuned-mental-health-qa.Q4_1.gguf | Q4_1 | 1.56GB |
| gemma-2b-it-finetuned-mental-health-qa.Q5_0.gguf | Q5_0 | 1.68GB |
| gemma-2b-it-finetuned-mental-health-qa.Q5_K_S.gguf | Q5_K_S | 1.68GB |
| gemma-2b-it-finetuned-mental-health-qa.Q5_K.gguf | Q5_K | 1.71GB |
| gemma-2b-it-finetuned-mental-health-qa.Q5_K_M.gguf | Q5_K_M | 1.71GB |
| gemma-2b-it-finetuned-mental-health-qa.Q5_1.gguf | Q5_1 | 1.79GB |
| gemma-2b-it-finetuned-mental-health-qa.Q6_K.gguf | Q6_K | 1.92GB |
| gemma-2b-it-finetuned-mental-health-qa.Q8_0.gguf | Q8_0 | 2.49GB |

Original model description:



datasets:
  • Amod/mental_health_counseling_conversations

library_name: transformers
license: mit

Model Card Summary


This model is a fine-tuned version of gemma-2b-it for mental health counseling conversations.
It was fine-tuned on the Amod/Mental Health Counseling Conversations dataset,
which contains dialogues related to mental health counseling.

Model Details

Model Description

This is the model card of a πŸ€— transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: Miguel Flores
  • Model type: Causal Language Model (based on transformers)
  • Language(s) (NLP): English
  • License: MIT License
  • Finetuned from model gemma-2b-it: gemma-2b-it, which is a base model fine-tuned for mental health-related queries.

Use Cases

Direct Use


This model is fine-tuned for generating responses related to mental health counseling tasks.
It can be used for providing suggestions, conversation starters, or follow-ups in mental health scenarios.

Downstream Use


This model can be adapted for use in more specific counseling-related tasks,
or in applications where generating mental health-related dialogue is necessary.

Out-of-Scope Use


The model is not intended to replace professional counseling.
It should not be used for real-time crisis management or any situation
requiring direct human intervention. Use in highly critical or urgent care situations is out of scope.

Bias, Risks, and Limitations

The model was trained on mental health-related dialogues, but it may still generate biased or
inappropriate responses. Users should exercise caution when interpreting or acting on the model's outputs,
particularly in sensitive scenarios.

Recommendations

The model should not be used as a replacement for professional mental health practitioners.
Users should carefully evaluate generated responses in the context of their use case.

How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GuelGaMesh01/gemma-2b-it-finetuned-mental-health-qa")
model = AutoModelForCausalLM.from_pretrained("GuelGaMesh01/gemma-2b-it-finetuned-mental-health-qa")
```

Example inference

```python
inputs = tokenizer("How can I manage anxiety better?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
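Because gemma-2b-it is trained on turn-based conversations, wrapping the question in Gemma's chat template usually improves response quality. A minimal sketch of that turn format follows; `build_gemma_prompt` is a hypothetical helper (calling `tokenizer.apply_chat_template` achieves the same thing):

```python
def build_gemma_prompt(user_message: str) -> str:
    # Gemma instruction-tuned models delimit user/model turns with the
    # <start_of_turn> and <end_of_turn> control tokens.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("How can I manage anxiety better?")
print(prompt)
```

The resulting string can be passed to the tokenizer in place of the raw question.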

Training Details

Training Data

The model was trained on the Amod/Mental Health Counseling Conversations dataset,
which consists of mental health dialogues focused on counseling situations.

Training Procedure

The model was fine-tuned using LoRA (Low-Rank Adaptation) with the following hyperparameters:

  • Batch Size: 1
  • Gradient Accumulation Steps: 4
  • Learning Rate: 2e-4
  • Epochs: 3
  • Max Sequence Length: 2500 tokens
  • Optimizer: paged_adamw_8bit
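With a per-device batch size of 1 and 4 gradient-accumulation steps, the effective batch size is 4. The sketch below also shows the standard LoRA parameter count for a single `d_in Γ— d_out` projection; the rank `r=8` and the 2048-dimensional layer are hypothetical values, since the card does not state them:

```python
def effective_batch_size(per_device_bs: int, grad_accum_steps: int) -> int:
    # An optimizer step is taken once every `grad_accum_steps` micro-batches.
    return per_device_bs * grad_accum_steps

def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA keeps the frozen weight W (d_out x d_in) and learns a low-rank
    # update B @ A, where A is (r x d_in) and B is (d_out x r).
    return r * (d_in + d_out)

print(effective_batch_size(1, 4))            # effective batch size per the card
print(lora_trainable_params(2048, 2048, 8))  # hypothetical r=8 on a 2048x2048 layer
```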

Evaluation

Testing Data

The model was evaluated using a split from the training data,
specifically a 10% test split of the original training dataset.

Metrics

The following metrics were used during the training and evaluation process:

  • Training Loss: The training loss was tracked during training to monitor how well the model was learning from the data. It decreased throughout the epochs.
  • Semantic Similarity: Semantic similarity was employed as the primary metric to assess the model’s ability to generate contextually relevant and meaningful responses. Since the dataset involves conversational context, particularly in the sensitive area of mental health counseling, it was crucial to evaluate how well the model understands and retains the intent and meaning behind the input rather than merely focusing on fluency or token-level prediction.
  • Perplexity: Perplexity was used as a metric to evaluate the model's ability to generate coherent and fluent text responses. The model was evaluated on a subset of the test data, and both non-finetuned and finetuned perplexities were compared.
