Sodium- and phosphorus-based food additives are among the most commonly consumed nutrients in the world, largely because both have diverse applications in processed food manufacturing and are therefore used widely by the food industry. Since most foods are naturally low in sodium, sodium additives account almost completely for the excessive consumption of sodium throughout the world. Similarly, phosphorus additives represent a major and “hidden” phosphorus load in modern diets. These factors pose a major barrier to successfully lowering sodium or phosphorus intake in patients with chronic kidney disease. As such, any serious effort to reduce sodium or phosphorus consumption will require reductions in the use of these additives by the food industry. The current regulatory environment governing the use of food additives does not favor this goal, however, in large part because these additives have historically been classified as generally safe for public consumption. To overcome these barriers, coordinated efforts will be needed to demonstrate that high intakes of these additives are not safe and, as such, should be subject to greater regulatory scrutiny.
nutrition; diet; sodium; phosphorus; chronic kidney disease
Protein-energy wasting (PEW) is highly prevalent in patients undergoing maintenance hemodialysis (MHD). Importantly, there is a robust association between the extent of PEW and the risk of hospitalization and death in these patients, regardless of the nutritional marker used. The multiple etiologies of PEW in advanced kidney disease are still being elucidated. Whatever the initiating mechanism, the common pathway for all of these derangements is exaggerated protein degradation along with decreased protein synthesis, and the hemodialysis procedure per se is an important contributor to this process. Metabolic and hormonal derangements such as acidosis, inflammation, and resistance to the anabolic actions of insulin and growth hormone are all implicated in the development of PEW in MHD patients. Appropriate management of MHD patients at risk for PEW requires a comprehensive combination of strategies to diminish protein and energy depletion and to institute therapies that will avoid further losses. The mainstay of nutritional treatment in MHD patients is provision of an adequate amount of protein and energy, using oral supplementation as needed. Intradialytic parenteral nutrition should be attempted in patients who cannot use the gastrointestinal tract efficiently. Other anabolic strategies such as exercise, anabolic hormones, anti-inflammatory therapies, and appetite stimulants can be considered as complementary therapies in suitable patients.
Nonvolatile acid is produced from the metabolism of sulfur-containing amino acids in dietary protein and from organic anions generated during the combustion of neutral foods. Organic anion salts, found primarily in plant foods, are directly absorbed in the gastrointestinal tract and yield bicarbonate. The difference between endogenously produced nonvolatile acid and absorbed alkali precursors constitutes the dietary acid load, technically known as the net endogenous acid production, which must be excreted by the kidney to maintain acid-base balance. Although typically around 1 mEq/kg/day, the dietary acid load is lower with greater intake of fruits and vegetables. In the setting of chronic kidney disease, a high dietary acid load invokes adaptive mechanisms to increase acid excretion despite reduced nephron number, such as increased per-nephron ammoniagenesis and augmented distal acid excretion mediated by the renin-angiotensin system and endothelin-1. These adaptations may promote renal injury. Additionally, high dietary acid loads produce low-grade, subclinical acidosis that may result in bone and muscle loss. Early studies suggest that lowering the dietary acid load can improve subclinical acidosis, preserve bone and muscle, and slow the decline of glomerular filtration rate in animal models and humans. Studies focusing on hard clinical outcomes are needed.
chronic kidney disease; nutrition; metabolic acidosis; net endogenous acid production
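Net endogenous acid production is often estimated from dietary protein and potassium intake using the published Frassetto regression. As a hedged, back-of-envelope illustration (the coefficients come from that regression, not from this abstract, and the example intakes are hypothetical):

```python
def estimated_neap(protein_g_per_day: float, potassium_meq_per_day: float) -> float:
    """Estimate net endogenous acid production (NEAP, mEq/day) from diet
    using the Frassetto et al. regression:
        NEAP ~= 54.5 * (protein [g/day] / potassium [mEq/day]) - 10.2
    The protein/potassium ratio serves as a proxy for acid precursors
    (sulfur-containing amino acids) versus base precursors (organic anion
    salts in fruits and vegetables)."""
    return 54.5 * (protein_g_per_day / potassium_meq_per_day) - 10.2

# Hypothetical typical Western diet: ~90 g protein, ~60 mEq potassium per day
western = estimated_neap(90, 60)       # ~71.6 mEq/day
# Hypothetical fruit/vegetable-rich diet: more potassium, lower acid load
plant_rich = estimated_neap(70, 110)   # ~24.5 mEq/day
```

For a 70-kg adult, the first estimate is consistent with the roughly 1 mEq/kg/day figure cited above, while the second illustrates how greater fruit and vegetable intake lowers the dietary acid load.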
Although the use of renal replacement therapy (RRT) to support critically ill patients with AKI has become routine, many of the fundamental questions regarding optimal management of RRT remain. This review summarizes current evidence regarding the timing of initiation of RRT, the selection of the specific modality of RRT, and the prescription of the intensity of therapy. While absolute indications for initiating RRT such as hyperkalemia and overt uremic symptoms are well recognized, the optimal timing of therapy in patients without these indications continues to be the subject of debate. There does not appear to be a difference in either mortality or recovery of kidney function associated with the various modalities of RRT. Finally, providing higher doses of RRT is not associated with improved clinical outcomes.
Acute Kidney Injury; Hemodialysis; Hemofiltration; Continuous Renal Replacement Therapy; Critical Illness
In addition to its role as a metabolic waste product, uric acid has been proposed to be an important molecule with multiple functions in human physiology and pathophysiology and may be linked to human diseases beyond nephrolithiasis and gout. Uric acid homeostasis is determined by the balance between production, intestinal secretion, and renal excretion. The kidney is an important regulator of circulating uric acid levels, by reabsorbing around 90% of filtered urate, while being responsible for 60–70% of total body uric acid excretion. Defective renal handling of urate is a frequent pathophysiologic factor underpinning hyperuricemia and gout. In spite of tremendous advances over the past decade, the molecular mechanisms of renal urate transport are still incompletely understood. Many transport proteins are candidate participants in urate handling, with URAT1 and GLUT9 being the best characterized to date. Understanding these transporters is increasingly important for the practicing clinician as new research unveils their physiology, importance in drug action, and genetic association with uric acid levels in human populations. The future may see the introduction of new drugs that specifically act on individual renal urate transporters for the treatment of hyperuricemia and gout.
Urate; hypouricemia; hyperuricemia; URAT1; GLUT9
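The kidney’s handling of urate described above is commonly quantified at the bedside as the fractional excretion of urate, computed from paired spot urine and plasma measurements. A minimal sketch of this standard formula (the numeric values below are hypothetical examples, not from the article):

```python
def fractional_excretion_urate(urine_urate: float, plasma_urate: float,
                               urine_cr: float, plasma_cr: float) -> float:
    """Fractional excretion of urate (%) from spot urine/plasma values.
    All concentrations must share the same units (e.g., mg/dL).
        FEurate = (U_urate * P_cr) / (P_urate * U_cr) * 100
    Creatinine terms normalize for urine concentration, so the result is
    the fraction of filtered urate that escapes tubular reabsorption."""
    return (urine_urate * plasma_cr) / (plasma_urate * urine_cr) * 100.0

# Hypothetical values: U_urate 60 mg/dL, P_urate 6 mg/dL,
#                      U_cr 100 mg/dL, P_cr 1.0 mg/dL
fe = fractional_excretion_urate(60, 6, 100, 1.0)  # 10.0%
```

An FEurate near 10% is consistent with the statement that roughly 90% of filtered urate is reabsorbed; markedly higher or lower values would point toward defective or enhanced tubular urate handling, respectively.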
Focal segmental glomerulosclerosis (FSGS) is one of the most common forms of acquired glomerular disease leading to end-stage kidney disease (ESKD), and its incidence is rising around the world. There is no proven therapy for patients who do not respond to corticosteroids, and FSGS recurs in 20–25% of patients who receive a kidney transplant. The disease can be primary or secondary to various conditions, including vesicoureteral reflux, obesity, medications, and infections. Recent advances have demonstrated the important role of genetic mutations in podocyte proteins as a cause of FSGS. There is an urgent need for randomized clinical trials to develop safe and effective therapy for FSGS that occurs in the native or transplanted kidney.
At the 2010 Conference on Living Kidney Donor Follow-up, a workgroup was convened to comment on the state of the evidence in four broad areas: (a) health-related quality of life postdonation; (b) donors’ financial and economic concerns; (c) outcomes issues specific to newer areas of donation, namely kidney exchange and anonymous (directed and nondirected) donation; and (d) the role of informed consent in relation to postdonation psychosocial outcomes. The workgroup sought to offer recommendations regarding research priorities for the next decade, and data collection strategies to accomplish the needed research. The workgroup concluded that there has been little consideration of the nature or predictors of any long-term psychosocial outcomes in living donors. In some areas (e.g., kidney exchange and anonymous donation) there is very limited information on outcomes even in the early aftermath of donation. Across all four psychosocial areas, prospective studies are needed that follow donors in order to examine the course of development and/or resolution of any donation-related difficulties. The formation of a national registry to routinely collect psychosocial follow-up data may be an efficient strategy to monitor donor outcomes in both the short and long term after donation.
living kidney donation; psychosocial outcomes; financial outcomes; quality of life; informed consent
Living donation is a common procedure in the United States. Substantial variation exists among transplant centers in their protocols and exclusion criteria for potential living donors. In the absence of clinical trial data to guide decisions about exclusion criteria, knowledge of current practices is an important first step in guiding the formulation of donor protocols as well as future studies. Certain trends in live donation practices have become apparent from surveys of transplant programs over the past few decades. Over the past 25 years, opposition in the US to living unrelated donation has gone from strong to essentially nonexistent. With respect to donor age, programs have become less strict regarding upper age limits, but stricter regarding younger donor candidates. Protocols regarding kidney function, blood pressure, and diabetes screening also continue to evolve. Although donor follow-up is mandated by the OPTN for two years after donation, a majority of donors are lost to follow-up by one year. The most commonly cited barriers to donor follow-up include donor inconvenience and cost issues, including reimbursement to care providers as well as direct and indirect costs to donors. Here, we review the current knowledge about living donor practices in the U.S.
Living Donors; Kidney Transplantation; Donor Screening; Donor Exclusion; Kidney Function Tests
In the United States, racial-ethnic minorities experience disproportionately high rates of end stage renal disease, but they are substantially less likely to receive living donor kidney transplants (LDKT) compared with their majority counterparts. Minorities may encounter barriers to LDKT at several steps along the path to receiving LDKT including consideration, pursuit, completion of LDKT, and the post-LDKT experience. These barriers operate at different levels related to potential recipients and donors, health care providers, health system structures, and communities. In this review, we present a conceptual framework describing various barriers minorities face along the path to receiving LDKT. We also highlight promising recent and current initiatives to address these barriers, as well as gaps in initiatives, which may guide future interventions to reduce racial-ethnic disparities in LDKT.
race; ethnicity; disparities; minority; organ donation; barriers to living kidney donation; living donor kidney transplantation
donor; incentive; kidney; transplantation; unrelated
A common frustration for practicing nephrologists is the adage that without randomized controlled trials (RCTs) we cannot establish causality, merely associations. The field of nephrology, like many other disciplines, has been suffering from a lack of RCTs. The view that, short of RCTs, there is no reliable evidence has hampered our ability to ascertain the best course of action for our patients. However, many clinically important questions in medicine and public health, such as the association of smoking with lung cancer, are not amenable to RCTs because of ethical or other considerations. Whereas RCTs unquestionably hold many advantages over observational studies, it should be recognized that they also have many flaws that render them fallible under certain circumstances. We provide a description of the various pros and cons of RCTs and of observational studies using examples from the nephrology literature, and argue that it is simplistic to rank them solely on the basis of preconceived notions about the superiority of one over the other. We also discuss methods whereby observational studies can become acceptable tools for causal inference. Such approaches are especially important in a field like nephrology, where there are myriad potential interventions based on complex pathophysiologic states, but where properly designed and conducted RCTs for all of these will probably never materialize.
observational studies; randomized controlled trials; causal inference
Autosomal Dominant Polycystic Kidney Disease (ADPKD) and Autosomal Recessive PKD (ARPKD) are important inherited kidney diseases with distinct clinical features and genetics. While these diseases have classically been considered “adult” (ADPKD) or “infantile/pediatric” (ARPKD), it is now clear that both diseases can present in children and adults. ADPKD and ARPKD also share important pathophysiologic features, including cilia dysfunction. ADPKD is a systemic disease involving cysts in the kidneys and abdominal organs as well as abnormalities in the heart and vasculature. Although ADPKD typically presents in adults, it has been diagnosed in fetuses, infants, children and adolescents. The majority of children diagnosed with ADPKD are asymptomatic. Those with symptoms typically present with hypertension or gross hematuria. Routine screening for renal cysts in asymptomatic children who have a parent with ADPKD is generally not recommended. ARPKD is a disorder confined to the kidneys (polycystic kidneys) and liver (a developmental biliary lesion called congenital hepatic fibrosis). Although most children with ARPKD present in infancy with large, echogenic kidneys, a subset present later in childhood and even adulthood, primarily with complications related to the liver disease. As more ARPKD patients survive to adulthood, these liver complications are likely to become more prevalent.
polycystic kidney disease; pediatric; genetic; renal cysts; congenital hepatic fibrosis
Primary vesicoureteral reflux (VUR) is the most common congenital urological abnormality in children and is associated with an increased risk of urinary tract infection (UTI) and renal scarring, also called reflux nephropathy (RN). In children, RN is diagnosed mostly after UTI (acquired RN) or during follow-up for antenatally diagnosed hydronephrosis with no prior UTI (congenital RN). Acquired RN is more common in female children, whereas congenital RN is more common in male children. This observation in children might help explain the differences in the clinical presentation of RN in adults: males present mostly with hypertension, proteinuria, and progressive renal failure, whereas females present mostly with recurrent UTI and have a better outcome. Known risk factors for RN include the severity of VUR, recurrent UTI, and bladder-bowel dysfunction; younger age and delay in treatment are believed to be additional risk factors. Management of VUR is controversial and includes antimicrobial prophylaxis, surgical intervention, or surveillance only. No evidence-based guidelines exist for appropriate follow-up or the management of patients with RN.
Vesicoureteral reflux; VUR; Reflux nephropathy; Prophylaxis
Human phosphate homeostasis is regulated at the level of intestinal absorption of phosphate from the diet, release of phosphate through bone resorption, and renal phosphate excretion. It involves the actions of parathyroid hormone (PTH), 1,25-dihydroxy-vitamin D (1,25-(OH)2-D), and fibroblast growth factor 23 (FGF23), which maintain circulating phosphate levels within a narrow normal range, essential for numerous cellular functions, for the growth of tissues, and for bone mineralization. Prokaryotic and single-celled eukaryotic organisms such as bacteria and yeast “sense” ambient phosphate with a multiprotein complex located in their plasma membrane, which modulates the expression of genes important for phosphate uptake and metabolism (the pho pathway). Database searches based on amino acid sequence conservation alone have been unable to identify metazoan orthologs of the bacterial and yeast phosphate sensors. Thus, little is known about how human and other metazoan cells sense inorganic phosphate to regulate the effects of phosphate on cell metabolism (“metabolic” sensing) or to regulate the levels of extracellular phosphate via feedback system(s) (“endocrine” sensing). Whether the “metabolic” and “endocrine” sensors use the same or different signal transduction cascades is unknown. This chapter will review the bacterial and yeast phosphate sensors, and then discuss what is currently known about the metabolic and endocrine effects of phosphate in multicellular organisms and humans.
Phosphate is absorbed in the small intestine by at least two distinct mechanisms: paracellular phosphate transport, which depends on passive diffusion, and active transport, which occurs through the sodium-dependent phosphate cotransporters. Although evidence is emerging for other ions, regulation of the phosphate-specific paracellular pathway remains largely unexplored. In contrast, a growing body of evidence indicates that active transport through the sodium-dependent phosphate cotransporter Npt2b is highly regulated by a diverse set of hormones and dietary conditions. Furthermore, conditional knockout of Npt2b suggests that it plays an important role in the maintenance of phosphate homeostasis by coordinating intestinal phosphate absorption with renal phosphate reabsorption. The knockout mouse also suggests that Npt2b is responsible for the majority of sodium-dependent phosphate uptake, with the type III sodium-dependent phosphate transporters Pit1 and Pit2 playing a minor role in total phosphate uptake. Despite co-expression along the apical membrane, the differential regulation of Pit1 and Npt2b in response to chronic versus acute dietary changes illustrates another layer of phosphate transport control. Finally, management of hyperphosphatemia is a major problem in patients with chronic kidney disease (CKD). Present evidence suggests that targeting key regulatory transporters of intestinal phosphate transport may provide novel therapeutic approaches for CKD patients.
Phosphorus is an essential nutrient that is routinely assimilated through the consumption of food. The body’s need for phosphate is usually fulfilled by intestinal absorption of this element from the consumed food, whereas its serum level is tightly regulated by renal excretion and reabsorption. Sodium-dependent phosphate transporters, located on the luminal side of the proximal tubular epithelial cells, exert molecular control over renal phosphate excretion and reabsorption. The systemic regulation of phosphate metabolism is a complex multiorgan process, and the identification of the fibroblast growth factor-23 (FGF23)–Klotho system as a potent phosphatonin has provided new mechanistic insights into the homeostatic control of phosphate. Hypophosphatemia resulting from increased urinary phosphate wasting after activation of the FGF23–Klotho system is a common phenomenon, observed in both animal and human studies, whereas suppression of the FGF23–Klotho system leads to the development of hyperphosphatemia. This article will briefly summarize how delicate interactions of the FGF23–Klotho system can regulate systemic phosphate homeostasis.
Klotho; FGF23; Vitamin D; Calcium; NaPi; PTH
The CKD mineral bone disorder is a new term coined to describe the multiorgan system failure that is a major component of the excess cardiovascular mortality and morbidity complicating decreased kidney function. This syndrome embodies new discoveries of organ-to-organ communication including the skeletal hormone fibroblast growth factor-23 (FGF-23), which signals the status of skeletal mineral deposition to the kidney. The CKD mineral bone disorder begins with mild decreases in kidney function (stage 2 CKD) affecting the skeleton, as marked by increased FGF-23 secretion. At this stage, the stimulation of cardiovascular risk has begun and the increases in FGF-23 levels are strongly predictive of cardiovascular events. Later in CKD, hyperphosphatemia ensues when FGF-23 and hyperparathyroidism are no longer sufficient to maintain phosphate excretion. Hyperphosphatemia has been shown to be a direct stimulus to several cell types including vascular smooth muscle cells migrating to the neointima of atherosclerotic plaques. Phosphorus stimulates FGF-23 secretion by osteocytes and expression of the osteoblastic transcriptome, thereby increasing extracellular matrix mineralization in atherosclerotic plaques, hypertrophic cartilage, and skeletal osteoblast surfaces. In CKD, the skeleton positively contributes to hyperphosphatemia through excess bone resorption and inhibition of matrix mineralization. Thus, through the action of phosphorus, FGF-23, and other newly discovered skeletal hormones, such as osteocalcin, the skeleton plays an important role in the occurrence of cardiovascular morbidity in CKD.
Mineral bone disorder; CKD
Elevated serum phosphate has clinically been associated with vascular stiffness and cardiovascular mortality. Mechanistic studies over the past decade looking at phosphate’s local effects on the vessel wall have lent insight into various pathways that culminate in vascular calcification. Smooth muscle cell phenotype change and apoptosis play prominent roles. The sodium-phosphate cotransporter PiT-1 is required for the osteochondrogenic differentiation of smooth muscle cells in vitro. Less is known about phosphate-driven valve interstitial cell calcification and elastin degradation. In this paper, we review the current knowledge about phosphate-induced changes in the vascular wall.
Vascular calcification; phosphate; chronic kidney disease; smooth muscle cell; elastin degradation
The diagnosis of acute kidney injury (AKI) is usually based on measurements of blood urea nitrogen (BUN) and serum creatinine. BUN and serum creatinine are not very sensitive or specific for the diagnosis of AKI because they are affected by many renal and nonrenal factors that are independent of kidney injury or kidney function. Biomarkers of AKI that are made predominantly by the injured kidney have been discovered in preclinical studies. In clinical studies of patients with AKI, some of these biomarkers (eg, interleukin-18, neutrophil gelatinase-associated lipocalin, and kidney injury molecule-1) have been shown to increase in the urine before the increase in serum creatinine. These early biomarkers of AKI are being tested in different types of AKI and in larger clinical studies. Biomarkers of AKI may also predict long-term kidney outcomes and mortality.
Biomarkers; Acute kidney injury; Interleukin-18; Neutrophil gelatinase-associated lipocalin; Kidney injury molecule-1; Cystatin C
The reference standard for making a diagnosis of hypertension among hemodialysis patients is 44-hour interdialytic ambulatory BP monitoring. However, a more practical way to diagnose and manage hypertension is to perform home BP monitoring that spans the interdialytic interval. In contrast to pre- and postdialysis BP recordings, measurements of BP made outside the dialysis unit correlate with the presence of left ventricular hypertrophy and directly and strongly with all-cause mortality. Hypervolemia that is not clinically obvious is the most common treatable cause of difficult-to-control hypertension; volume control should be the initial therapy for hypertension in most hemodialysis patients. To diagnose hypervolemia, continuous blood volume monitoring is emerging as an effective and simple technique. Reducing dietary and dialysate sodium is an often overlooked strategy to improve BP control. Although definitive randomized trials demonstrating cardiovascular benefits of BP lowering among hypertensive hemodialysis patients have not been performed, emerging evidence suggests that lowering BP may reduce cardiovascular events. Since predialysis and postdialysis BP are quite variable and agree poorly with measurements obtained outside the dialysis unit, treatment should be guided by BP obtained outside the dialysis unit. While the appropriate level to which BP should be lowered remains elusive, current data suggest that interdialytic ambulatory systolic BP should be lowered to <130 mmHg and averaged home systolic BP to <140 mmHg. Antihypertensive drugs will be required by most patients receiving 4-hour thrice-weekly dialysis. Beta-blockers, dihydropyridine calcium channel blockers, and agents that block the renin-angiotensin system appear to be effective in lowering BP in these patients.
Hypertension; diagnosis; hemodialysis; home BP monitoring; ambulatory BP monitoring; treatment; pathophysiology
CKD is a major public health problem in the developed and the developing world. The degree of proteinuria associated with renal failure is a generally well-accepted marker of disease severity. Agents with direct antiproteinuric effects are highly desirable therapeutic strategies for slowing, or even halting, progressive loss of kidney function. We review progress on therapies acting further downstream of the renin–angiotensin–aldosterone system pathway (e.g., transforming growth factor-beta antagonism, endothelin antagonism) and on those acting independently of the renin–angiotensin–aldosterone system pathway. In all, we discuss 26 therapeutic targets or compounds and 2 lifestyle changes (dietary modification and weight loss) that have been used clinically for diabetic or nondiabetic kidney disease. These therapies include endogenous molecules (estrogens, isotretinoin), biologic antagonists (monoclonal antibodies, soluble receptors), and small molecules. Where mechanistic data are available, these therapies have been shown to exert favorable effects on glomerular cell phenotype. In some cases, recent work has indicated surprising new molecular pathways for some therapies, such as direct effects on the podocyte by glucocorticoids, rituximab, and erythropoietin. It is hoped that recent advances in the basic science of kidney injury will prompt development of more effective pharmaceutical and biologic therapies for proteinuria.
Proteinuria; Albuminuria; Podocyte; Glomerulus; Diabetes; Novel therapies