Research Literacy for Ayurveda: What You Absolutely Must Know

Dr. Aakash Kembhavi, MD (Ayu), PGDMLS, MS (Counselling & Psychotherapy)

“An expert is a person who has made all the mistakes that can be made in a very narrow field.” — Niels Bohr

This article was produced with AI collaboration assistance under the Ayurveda Unfiltered editorial framework. AI tools were used for structural drafting and language refinement. Final intellectual framing, critical perspective, and thematic direction are those of the named author.

Introduction: You Don’t Have to Be a Researcher to Think Like One

Not every Ayurveda teacher, PG scholar, or final-year BAMS student will conduct a landmark clinical trial. Most will not. But every single one of you will read research papers — or be asked to cite them. You will supervise dissertations, evaluate evidence for clinical decisions, attend conferences where studies are presented, and be expected to contribute to a tradition that is increasingly being asked to justify itself in scientific terms.

Research literacy is not the same as conducting research. It is the ability to understand what a study is claiming, assess whether the claim is justified, and recognise when someone — including yourself — is being misled by numbers that look convincing but are not.

This article covers the bare minimum you need. Not to pass an exam. To think clearly.

And at every step, we will ask the same uncomfortable question: how does this apply to the research being produced in our own field?

Section 1: The Mindset Before the Method

Before any tool or technique, research requires a specific mental posture: organised scepticism. This means you do not accept a claim because it comes from an authority, a classic text, or a paper with impressive-looking statistics. You ask: what is the evidence, how was it obtained, and how strong is it?

This is not cynicism. It is the minimum standard of intellectual honesty that any tradition claiming clinical validity must meet.

In Ayurveda, this mindset is particularly important because our field has developed a cultural habit of treating published research as validation rather than investigation. When a paper appears to support a classical claim, it is celebrated and shared widely. When a paper challenges a classical claim, it is quietly set aside or dismissed as methodologically flawed. A research-literate mind does the opposite: it scrutinises the supportive paper just as hard as the critical one.

The honest question every Ayurveda scholar must sit with: Are we producing research to discover the truth about our interventions, or to confirm what we already believe?

Section 2: The Language of Research

Every field has its vocabulary. Research methodology is no different. Here are the foundational terms you must know — not as definitions to memorise, but as concepts to understand.

Research Question: The specific, answerable question your study is designed to address. “Does Triphala work?” is not a research question. “Does a standardised aqueous extract of Triphala, administered orally at 5g per day for 12 weeks, reduce fasting blood glucose levels in adults with Type 2 diabetes compared to placebo?” is a research question.

The specificity is not pedantry. It is what makes the answer meaningful.

Hypothesis: A formal, testable prediction. Every study tests a null hypothesis — the assumption that there is no effect — against an alternative hypothesis — the assumption that there is an effect. The study either provides sufficient evidence to reject the null hypothesis or it does not. It never “proves” the alternative hypothesis true. This is a subtle but critical distinction.

Variables:

  • The independent variable is what you manipulate or observe as a potential cause (the intervention or exposure).
  • The dependent variable is what you measure as the outcome (the effect).
  • A confounding variable is something that influences both the independent and dependent variables and can create a false appearance of a relationship between them.

In Ayurveda research, confounders are almost never adequately controlled. Consider a study showing that patients improved after receiving Panchakarma: during the treatment period, those same patients also made dietary changes, adopted lifestyle modifications, and received increased clinical attention. Which of these caused the improvement? Without controlling for confounders, you cannot say.

The PICO Framework: Every clinical research question should be framed as:

  • P — Population (who?)
  • I — Intervention (what treatment or exposure?)
  • C — Comparator (compared to what?)
  • O — Outcome (measured how, and when?)

Most Ayurveda dissertations fail at this first step. The question is too broad, the outcome is vaguely defined, and the comparator is either absent or poorly chosen.

Section 3: Study Designs — What Each One Can and Cannot Prove

The design of a study determines the strength of the conclusions it can support. This is not optional knowledge — it is the single most important skill in reading a research paper.

The Evidence Hierarchy (from weakest to strongest):

  • Expert opinion and case reports
  • Cross-sectional studies
  • Case-control studies
  • Cohort studies
  • Randomised Controlled Trials (RCTs)
  • Systematic reviews and meta-analyses of RCTs

Cross-sectional study: A snapshot. You measure exposure and outcome at the same point in time. You can identify associations but cannot establish cause and effect because you do not know which came first.

Case-control study: You start with people who have the outcome (cases) and compare them with people who do not (controls), looking backwards at their exposures. Useful for rare diseases. Prone to recall bias — people with a disease remember their past exposures differently than healthy people do.

Cohort study: You start with people who have different exposures and follow them forward in time to see who develops the outcome. This establishes temporal sequence, which is essential for causation. But it is expensive and slow, and confounders are difficult to fully control.

Randomised Controlled Trial (RCT): The gold standard for establishing causation. Participants are randomly assigned to intervention or control groups, distributing both known and unknown confounders evenly across groups, on average. When well conducted, an RCT allows an observed outcome to be attributed to the intervention with justified confidence.

Systematic Review / Meta-analysis: A formal synthesis of all existing evidence on a question, using explicit, reproducible methods. The most reliable form of evidence when multiple good-quality studies exist.

In Ayurveda, the overwhelming majority of published studies are small, uncontrolled, single-centre trials with no randomisation, no blinding, and no pre-specified outcomes. This places them at or near the bottom of the evidence hierarchy — not because Ayurveda is incapable of generating better evidence, but because the institutional culture has not yet demanded it.

Section 4: Sampling — Who Gets Into the Study

A study can only tell you about the people in it. Whether its findings apply to people outside it depends entirely on how those people were selected.

Population is the entire group you are interested in. Sample is the subset you actually study. The sample must be representative of the population for the findings to be generalisable.

Probability sampling (random sampling, stratified sampling) gives every member of the population a calculable chance of being selected. It minimises selection bias and supports generalisation.

Non-probability sampling (convenience sampling — taking whoever is available) is easy but dangerous. If you recruit only patients attending your own outpatient department, your findings apply to patients attending your outpatient department. Not to Ayurvedic patients broadly. Not to the general population.

Sample size is not a number you choose based on convenience or because your senior told you “30 is enough.” It is calculated mathematically based on the expected effect size, the acceptable error rates, and the desired statistical power. A study that is too small will miss a real effect. A study that is too large wastes resources and exposes more participants to risk than necessary.

The vast majority of Ayurveda dissertations use convenience samples of 30 patients per group because that is the institutional norm. There is no statistical justification for this number in most of the studies where it appears. It is tradition masquerading as methodology.
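The calculation itself is not exotic. The following sketch uses the standard normal-approximation formula for comparing two group means; the effect size, alpha, and power values are illustrative assumptions, and a real study should still involve a statistician:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-group comparison of means.

    effect_size is Cohen's d: the expected difference in means divided
    by the standard deviation. All inputs here are illustrative assumptions.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ≈ 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A moderate effect (d = 0.5) needs about 63 patients per group,
# more than double the ritual "30 per group"
print(sample_size_per_group(0.5))
```

Run the numbers and the institutional norm collapses: detecting even a moderate effect with conventional error rates requires roughly twice the sample that most dissertations enrol.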

Section 5: Data — Types, Collection, and Quality

Not all data are the same, and the type of data you collect determines which statistical tests you can use.

Nominal data: Categories with no order. Blood group (A, B, O, AB). Gender. Prakriti type. You can count how many fall in each category. You cannot calculate a meaningful average.

Ordinal data: Categories with a meaningful order but unequal intervals. Pain scale (mild, moderate, severe). Grading of a symptom. You can rank, but you cannot assume the gap between “mild” and “moderate” is the same as between “moderate” and “severe.”

Interval data: Ordered, with equal intervals, but no true zero. Temperature in Celsius. The difference between 20°C and 30°C is the same as between 30°C and 40°C, but 0°C does not mean “no temperature.”

Ratio data: Ordered, equal intervals, and a true zero. Height, weight, blood glucose, haemoglobin. You can calculate means, standard deviations, and ratios meaningfully.

Data quality depends on two properties:

  • Validity: Does the tool measure what it claims to measure?
  • Reliability: Does it produce consistent results when used repeatedly under the same conditions?

In Ayurveda research, outcome measures are frequently neither valid nor reliable. A researcher who creates their own symptom scoring scale for a dissertation, uses it once, and reports results as if it were a validated instrument is not generating data. They are generating the appearance of data.

Section 6: Biostatistics — The Bare Minimum

Statistics are a tool for managing uncertainty. They do not eliminate uncertainty — they quantify it honestly. Here is what you must understand.

Descriptive Statistics describe your data as it is. The mean is the average. The median is the middle value when data are ranked. The mode is the most frequent value. The standard deviation (SD) tells you how spread out the values are around the mean. A small SD means values cluster tightly around the mean. A large SD means they are widely dispersed.

When data are normally distributed (the classic bell curve), mean and SD together give you a complete picture. When data are skewed — as biological data often are — the median and interquartile range are more honest descriptors.
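To see why, compare the mean and the median on a small hypothetical dataset where a single outlier skews the distribution (the values are invented purely for illustration):

```python
from statistics import mean, median

# Hypothetical symptom-duration data in days; one extreme value
durations = [4, 5, 5, 6, 6, 7, 7, 8, 40]

print(mean(durations))    # pulled upward by the outlier (about 9.8)
print(median(durations))  # 6, a more honest "typical" value here
```

One patient out of nine drags the mean nearly four days above the value that describes everyone else, which is exactly why skewed biological data call for the median.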

Inferential Statistics allow you to draw conclusions about a population from a sample. The core question is always: could the result I observed have occurred by chance alone?

The p-value is the probability of observing a result at least as extreme as the one obtained, assuming the null hypothesis is true. A p-value of 0.03 means that, if the null hypothesis were true, a result at least this extreme would be expected in only 3% of identical studies. By convention, p < 0.05 is considered “statistically significant.”

What the p-value does NOT tell you:

  • How large or clinically meaningful the effect is
  • Whether the study was well-designed
  • Whether the finding will replicate
  • Whether the null hypothesis is actually true or false

The p-value is probably the most misunderstood and misused number in all of clinical research. In Ayurveda publications, p < 0.05 is routinely treated as proof of efficacy. It is not.
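The meaning of the 0.05 convention can be shown by simulation. In this sketch (a simplified two-sample z-test with the population SD assumed known, purely as a teaching device), both groups are drawn from the same population, so the null hypothesis is true by construction; roughly 5% of trials still come out "significant":

```python
import random
from statistics import NormalDist, mean

def z_test_pvalue(a, b, sigma=1.0):
    """Two-sided z-test for a difference in means; SD assumed known (illustration only)."""
    se = sigma * (1 / len(a) + 1 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(42)
trials, n, alpha = 2000, 30, 0.05
false_positives = sum(
    z_test_pvalue([random.gauss(0, 1) for _ in range(n)],
                  [random.gauss(0, 1) for _ in range(n)]) < alpha
    for _ in range(trials)
)
print(false_positives / trials)  # close to 0.05: "significance" by chance alone
```

One in twenty null studies will cross the significance threshold by chance. A single p < 0.05 result, on its own, is exactly as strong as that fact implies.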

Confidence Intervals (CI) are more informative than p-values. A 95% CI is constructed so that, across many identical studies, 95% of the intervals produced would contain the true effect size. A narrow CI means your estimate is precise. A wide CI means there is substantial uncertainty. If a 95% CI for a drug’s effect on blood pressure reduction is 2–18 mmHg, you know the effect is real (the interval does not cross zero), but you also know it is quite imprecise.

Type I Error (False Positive): You conclude there is an effect when there is none. The probability of this is your alpha level — typically 0.05.

Type II Error (False Negative): You conclude there is no effect when there actually is one. The probability of this is beta — typically 0.10 to 0.20. Statistical power (1 - beta) is the probability of correctly detecting a real effect. Most small Ayurveda trials are severely underpowered, meaning they would miss a real effect even if one existed.
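Underpowering is easy to demonstrate by simulation. In this sketch (the same simplified known-SD z-test used purely as a teaching device; the moderate true effect of 0.5 SD and n = 30 per group are assumptions for illustration), the effect genuinely exists, yet about half of all trials fail to detect it:

```python
import random
from statistics import NormalDist, mean

random.seed(7)
z_crit = NormalDist().inv_cdf(0.975)   # two-sided alpha = 0.05
trials, n, effect = 2000, 30, 0.5      # assumed true effect: 0.5 SD

detections = 0
for _ in range(trials):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(effect, 1.0) for _ in range(n)]  # effect is real
    z = (mean(treated) - mean(control)) / (2 / n) ** 0.5     # known-SD z statistic
    if abs(z) > z_crit:
        detections += 1

print(detections / trials)  # around 0.5: a real effect, missed half the time
```

A trial with 30 patients per group is, under these assumptions, a coin flip. A "no significant difference" result from such a trial tells you almost nothing about whether the treatment works.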

Section 7: Reading a Results Section Without Getting Lost

A results section contains numbers. Your job is to understand what those numbers mean clinically, not just statistically.

Relative Risk (RR): The ratio of the probability of an outcome in the exposed group to the probability in the unexposed group. An RR of 0.5 means the intervention halves the risk.

Absolute Risk Reduction (ARR): The actual difference in event rates between groups. If 20% of the control group and 10% of the treatment group had the outcome, the ARR is 10 percentage points.

Number Needed to Treat (NNT): How many patients must receive the intervention for one additional patient to benefit. NNT = 1 / ARR. An NNT of 10 means you treat 10 patients to prevent one adverse outcome. Lower NNT = more clinically useful intervention.

Why this matters: A drug that reduces relative risk by 50% sounds impressive. If the baseline risk is 2%, a 50% relative reduction means absolute risk drops from 2% to 1% — an ARR of 1% and an NNT of 100. Whether that is clinically worthwhile depends on the severity of the condition, the cost of treatment, and the side effect profile. Relative risk figures without absolute risk context are routinely used to make modest effects appear dramatic.
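Both worked examples above reduce to a few lines of arithmetic:

```python
def risk_metrics(risk_control, risk_treated):
    """Relative risk, absolute risk reduction, and number needed to treat."""
    rr = risk_treated / risk_control
    arr = risk_control - risk_treated
    return rr, arr, 1 / arr

# First example: 20% of controls vs 10% of treated had the outcome
print(risk_metrics(0.20, 0.10))  # RR 0.5, ARR 0.10, NNT 10

# The "impressive" 50% relative reduction on a 2% baseline risk
print(risk_metrics(0.02, 0.01))  # RR 0.5, ARR 0.01, NNT 100
```

Note that the relative risk is identical (0.5) in both cases; only the absolute figures reveal that one intervention helps one patient in ten and the other one in a hundred.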

Forest plots display the results of multiple studies simultaneously. Each study is represented by a horizontal line (the confidence interval) and a central box (the point estimate). The diamond at the bottom represents the pooled result. If the diamond does not cross the vertical line of no effect, the pooled result is statistically significant.

Statistical significance vs clinical significance: A study with 10,000 participants may find a statistically significant reduction in a biomarker of 0.1 units — real, but clinically meaningless. A small pilot study may find a clinically meaningful effect that fails to reach statistical significance due to inadequate power. These are different problems. Always ask: even if this result is real, does it matter to the patient in front of me?

Section 8: Reading a Research Paper Critically

Most people read abstracts. Abstracts are written to impress. The methods section is where the truth lives.

IMRAD Structure: Introduction, Methods, Results, and Discussion. This is the standard format of a research paper. The Methods section tells you exactly what was done. If it is vague, brief, or evasive, treat the results with proportional scepticism.

What to check in the Methods:

  • Is the study design clearly stated and appropriate for the question?
  • How were participants selected and allocated?
  • Was there blinding? If not, why not?
  • What are the primary and secondary outcomes, and were they pre-specified?
  • How was sample size justified?
  • How were missing data handled?

What to check in the Results:

  • Are baseline characteristics of groups comparable (especially in RCTs)?
  • Are all enrolled participants accounted for at the end?
  • Are effect sizes reported with confidence intervals, not just p-values?

What to check in the Discussion:

  • Do the conclusions match the results, or do they overreach?
  • Are limitations acknowledged honestly?
  • Are conflicts of interest declared?

Journal quality: Not all journals are equal. Predatory journals — those that charge publication fees with little or no peer review — have proliferated in India and have found a willing market in Ayurveda research. A paper published in a journal whose name you have never heard of, with a processing time of two weeks, and which publishes on every topic imaginable, is not peer-reviewed evidence. It is a paid placement. Recognising predatory journals is now a core research literacy skill.

Section 9: The Ethics Spine

Research ethics are not paperwork. They are the moral foundation of the entire enterprise.

Informed consent means every participant understands what the study involves, what risks exist, that participation is voluntary, and that they can withdraw at any time without penalty. Obtaining consent from patients who are economically dependent on a free hospital, who cannot read the consent form, or who are pressured by a treating physician to participate is not ethical consent — it is a procedural simulacrum of consent.

Ethics Committee approval must be obtained before the study begins. Obtaining it retrospectively — a practice that is disturbingly common in Ayurveda dissertations — is a fundamental violation of research ethics.

Data fabrication and falsification — inventing or manipulating data — are research misconduct. They are not shortcuts. They are fraud.

Authorship should be based on the ICMJE criteria: substantial contribution to conception, design, data acquisition, or analysis; drafting or critically revising the manuscript; final approval; and accountability for the work. Gift authorship — including a department head’s name because it is expected — and ghost authorship — excluding the person who actually wrote the paper — are both violations.

Plagiarism is presenting someone else’s words or ideas as your own. In an era of plagiarism detection software, the physical act of copying is detectable. But conceptual plagiarism — taking someone’s framework, argument, or original insight without attribution — is equally dishonest and far less detectable.

Section 10: Ayurveda-Specific Research Challenges

Every tradition faces specific methodological challenges when it enters the research arena. These are ours — stated plainly.

Operationalising classical constructs: How do you measure Prakriti in a way that two researchers in different cities will arrive at the same result for the same patient? How do you define Agni dourbalya in terms that a statistician can work with? How do you standardise Vata-Pitta-Kapha assessments across investigators? These are not trivial problems. They are unsolved problems. Studies that use these constructs as variables without addressing their measurement validity are building on sand.

Intervention standardisation: A classical formulation like Dashamoola Kwatha made in a hospital pharmacy in Bengaluru is not the same product as one made in a pharmacy in Jamnagar. Raw material variability, processing differences, and storage conditions create products that may have significantly different compositions. A trial testing “Dashamoola Kwatha” without characterising the actual product — through phytochemical analysis, HPLC fingerprinting, or at minimum a detailed preparation protocol — is not a drug trial. It is a label trial.

Blinding challenges: Many Ayurvedic interventions — Panchakarma procedures, distinctive-tasting formulations, dietary modifications — cannot be meaningfully blinded. This is a genuine methodological constraint. The honest response is to acknowledge it, use active comparators where possible, use blinded outcome assessors, and be explicit about what the lack of blinding means for interpreting results. The dishonest response is to not mention it.

Outcome selection: Ayurvedic practice operates through a different conceptual framework than biomedical practice. Not every Ayurvedic intervention should be evaluated on biomedical biomarkers alone. Patient-reported outcomes, quality of life measures, and functional assessments may be more appropriate endpoints for some questions. But these instruments must be validated, and the choice of outcome must be justified before the study begins — not after the biomarker results are disappointing.

Section 11: The EQUATOR Network — A Free Toolkit Every Scholar Must Know

There is a global initiative that has done something extraordinarily useful: it has collected, organised, and made freely available the reporting standards for virtually every type of health research study that exists. It is called the EQUATOR Network — which stands for Enhancing the QUAlity and Transparency Of health Research — and it is available to anyone with an internet connection at www.equator-network.org.

The EQUATOR Network was established to improve the reliability of medical publications by promoting transparent and accurate reporting of health research. It is not a journal, not a regulatory body, and not a paid subscription service. It is a publicly accessible resource centre that any researcher, student, or teacher can use — today, for free.

What Does the EQUATOR Network Actually Contain?

The EQUATOR Network publishes over 250 research reporting guidelines and operates three centres in the United Kingdom, Canada, and France that raise awareness and support adoption of good research reporting practices. The website maintains a comprehensive, up-to-date list of guidelines and a series of toolkits designed for authors, editors, developers, librarians, and teachers.

A reporting guideline is defined as a checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology. In plain language: for every type of study you might conduct or read — an RCT, an observational study, a systematic review, a case report, a diagnostic accuracy study — there is a checklist that tells you exactly what information must be present for that study to be considered adequately reported.

The Core Guidelines Every Ayurveda Scholar Must Know

These are the most important reporting guidelines, each matched to a study type:

CONSORT (Consolidated Standards of Reporting Trials) — for Randomised Controlled Trials. This is the gold standard checklist for RCTs. It specifies exactly what must be reported: how randomisation was done, how allocation was concealed, how blinding was implemented, how the sample size was calculated, how missing data were handled, and how adverse events were recorded and reported. If a published RCT does not satisfy the CONSORT checklist, it is an inadequately reported trial — regardless of how impressive its results look.

STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) — for cohort studies, case-control studies, and cross-sectional studies. Most Ayurveda clinical studies are observational. STROBE specifies what must be disclosed about participant selection, exposure measurement, outcome definition, confounding control, and statistical methods.

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) — for systematic reviews and meta-analyses. If you are writing a literature review that claims to be systematic, PRISMA tells you exactly what that means and what your paper must contain. A narrative review that cherry-picks favourable citations is not a systematic review, regardless of what the title says.

CARE — for case reports. Even a single case report has a reporting standard. The patient’s presenting concerns, clinical findings, timeline, diagnostic assessment, therapeutic interventions, and follow-up must all be documented according to a specified structure.

ARRIVE — for animal research. Studies on animal models — including in vivo pharmacological studies on Ayurvedic formulations — must meet this standard.

In 2015, EQUATOR created a simple flow chart to help authors, editors, and peer reviewers find the most appropriate checklist and reporting guideline, and subsequently launched an online wizard called GoodReports (www.goodreports.org) which includes 16 of the most commonly used reporting guidelines. If you are unsure which guideline applies to your study, this wizard will tell you in under a minute.

Why Does This Matter for Reading Papers?

The EQUATOR guidelines are not just for authors. They are equally powerful tools for readers. When you read a published Ayurveda research paper, you can open the relevant EQUATOR checklist and go through it item by item. Does this paper tell you how the sample size was calculated? Does it state whether allocation was concealed? Does it report what happened to participants who dropped out? Does it declare conflicts of interest?

If the answer to most of these questions is “no,” you are reading an inadequately reported study. The absence of information is itself information — and it should substantially reduce your confidence in the conclusions.

The aim of a reporting guideline is to ensure that readers can understand the text, that the research can be replicated by other researchers, that it can be included in a systematic review, and that it can aid doctors in making clinical decisions. A study that cannot be replicated because its methods are too vaguely described has not contributed to knowledge. It has only contributed to the citation count.

The EQUATOR Mirror for Ayurveda Research

Here is the uncomfortable truth. If you were to take the published output of any Ayurveda PG dissertation from the past decade and hold it against the CONSORT or STROBE checklist, the majority of required items would be absent, vague, or unverifiable. This is not an opinion — it is a checkable, reproducible finding that any scholar can verify themselves.

Reporting guidelines are still not widely supported by medical journals or adhered to by researchers, and thus their potential impact is lessened. This is true even in mainstream biomedical research. In Ayurveda research, adherence to these standards is virtually non-existent — not because scholars are incapable of meeting them, but because no one has made it a requirement, and most scholars are unaware that these standards exist.

This needs to change. Every Ayurveda PG guide should require their scholars to submit a completed EQUATOR checklist alongside their dissertation. Every Ayurveda journal editor should require it as part of manuscript submission. Every final-year BAMS student writing a research protocol should be introduced to it before they design their first study.

The EQUATOR Network is free. It is comprehensive. It is the global standard. There is no justification for any health researcher in any tradition to be unaware of it.

Visit: www.equator-network.org
Use the guideline wizard: www.goodreports.org

Section 12: The State of Ayurveda Research — A Frank Audit

This section is not written to embarrass anyone. It is written because the gap between where Ayurveda research currently stands and where it needs to be cannot be closed if it is not first honestly described.

Mistake 1: The 30-patient convenience sample as standard

Across thousands of Ayurveda PG dissertations, the sample size of 30 patients per group has become a ritual rather than a calculation. It has no statistical justification in most of the contexts in which it appears. It is too small to detect moderate effects, too small to assess safety signals, and too small to be generalisable to any population. The dissertation is completed, the degree is awarded, and the “study” enters the citation pool — where it will be cited by the next researcher as supporting evidence.

Mistake 2: No control group, or an inadequate one

A study that shows patients improved after receiving an Ayurvedic intervention, with no control group, has shown only that patients improved. It has not shown that the intervention caused the improvement. Patients improve over time for many reasons: natural disease progression, regression to the mean, concurrent treatments, the therapeutic effect of attention and care. Without a control group, none of these alternatives can be excluded.

Mistake 3: Unvalidated outcome measures

Researchers routinely construct their own symptom scoring tools — assigning numerical scores to clinical features — without any validation process. These tools have not been tested for inter-rater reliability (do two observers score the same patient the same way?), test-retest reliability (does the same observer score the same patient consistently on two occasions?), or content validity (does the scale actually capture the construct it claims to capture?). Results produced by such tools are not data — they are the researcher’s subjective impressions expressed in numerical form.

Mistake 4: Post-hoc outcome switching

The primary outcome of a study should be decided before data collection begins and should not change based on which variable happened to show a significant result. In many Ayurveda studies — particularly dissertations — the outcome that reached p < 0.05 becomes the “primary outcome” in the write-up, while outcomes that showed no significant change are quietly moved to secondary status or omitted. This is not analysis. It is storytelling with numbers.

Mistake 5: Treating statistical significance as proof of efficacy

The appearance of p < 0.05 in a results table is routinely used, in conclusions and in institutional communication, as evidence that a treatment works. As discussed in Section 6, this is a fundamental misunderstanding of what a p-value means. A p-value of 0.04 in an underpowered, unblinded, uncontrolled trial with an unvalidated outcome measure tells you almost nothing about whether the treatment is genuinely effective.

Mistake 6: Publishing in predatory journals and citing them as evidence

The proliferation of low-quality open-access journals that charge publication fees with minimal or no peer review has provided an easy outlet for methodologically weak Ayurveda research. Papers published in such journals have not been independently reviewed, have not met any quality standard, and cannot be cited as scientific evidence. The practice of building literature reviews from such citations — and then claiming that “multiple studies support” a finding — creates a false impression of an evidence base where none exists.

Mistake 7: Retroactive validation claiming

This is perhaps the most intellectually dishonest pattern in Ayurveda research. A modern scientific finding is published — say, on the gut microbiome, or on autophagy, or on the anti-inflammatory properties of a phytochemical. Within months, Ayurveda papers appear claiming that this finding “validates” or “proves” a classical concept. The classical concept is cited, the modern paper is cited, and the conclusion is drawn that Ayurveda knew this all along. This is not research. It is motivated reasoning dressed in academic clothing. The classical authors did not know what the gut microbiome is. Claiming otherwise does not honour the tradition — it diminishes it.

Mistake 8: Ethical shortcuts

Retrospective ethics approvals, inadequate informed consent procedures, inclusion of students or economically dependent patients without adequate safeguards, and ghost authorship are more common than the field acknowledges. These are not minor procedural failures. They are violations of the foundational contract between researcher and participant.

Conclusion: Literacy Is the Starting Point, Not the Destination — But the EQUATOR Network Is Where You Begin

Understanding these concepts will not make you a researcher overnight. But it will make you a more honest practitioner, a more rigorous teacher, a more critical reader, and a more credible contributor to the ongoing conversation about what Ayurveda is, what it can do, and what it still needs to demonstrate.

Before you write your next paper, open www.equator-network.org, find the checklist that matches your study design, and go through it line by line. Before you cite a paper in your next literature review, hold it against the same checklist and ask whether it meets the minimum standard of reporting. This single habit, if adopted consistently, would improve the quality of Ayurveda research output more than any curriculum revision or regulatory mandate.

The tradition does not need defenders who protect it from scrutiny. It needs scholars who are literate enough to apply scrutiny honestly — including to their own work — and brave enough to act on what they find.

Research literacy is not a threat to Ayurveda. It is the only path by which Ayurveda can make claims that the world is obliged to take seriously.

That path begins with knowing the difference between a finding and a fact, between a p-value and proof, and between evidence that was sought and evidence that was manufactured.

Start there.

Ayurveda Unfiltered is a forum for critical engagement with Ayurvedic education, practice, and research. Articles are posted on this blog every week. A companion WhatsApp discussion community engages with each article through structured weekend conversations. If you would like to join the discussion, reach out through the blog.

© 2026 Astanga Wellness Pvt. Ltd. All rights reserved.

