Boyd Orr and food security
Leading up to the formation of the Nutrition Society, the issue that preoccupied John Boyd Orr during the 1920s, 30s and 40s was food security, a concern that has re-emerged today. At first this interest operated at a national level, but it soon extended to food security at an international level and formed the basis of his Nobel Peace Prize in 1949.
During the 1930s, Boyd Orr published his book ‘Food, Health and Income’, which became a major embarrassment to the UK Government of the day because it showed, contrary to official belief, that many people in the UK were inadequately nourished as a result of poverty, with adverse effects on their health. This led Boyd Orr to realise that good nutrition was intimately linked to an understanding of the food chain.
The reason for making this point is to highlight that food security is not a new issue. The political context may have changed, but the fundamental question remains how to understand the inter-linkages between food supply through agriculture and the consequences for nutrition and health. It is these interrelationships, from plough through practice to policy, that have been interwoven into nutrition research over the past 70 years.
Golden era of animal nutrition
In the post-war period, UK nutritionists turned their attention towards animal nutrition. This was a direct response to the food insecurity that had been exposed during the Second World War, when food rationing had been required. As a result, the landscape of livestock production from the 1960s onwards was transformed. At the vanguard of this revolution in animal nutrition was Kenneth Blaxter, a former President of the Nutrition Society and the third Director of the Rowett Institute in Aberdeen. Blaxter and his colleagues re-assessed the relationship between diet, energy and growth in both ruminant and non-ruminant animals.
In ruminants, one of the key advances that made such progress possible was the discovery that volatile fatty acids are the end products of microbial fermentation( Reference Barcroft, McAnally and Phillipson 1 ). This discovery was made in 1944 by another ex-President of the Nutrition Society, Joseph Barcroft, and his colleagues at Cambridge. Their findings overturned the longstanding view that ruminants, like human subjects, used glucose as the primary energy source for growth( Reference Blaxter 2 ). Instead they drew attention to the importance of the ruminant microbial ecosystem for the conversion of plant polysaccharides into volatile fatty acids for ruminant nutrition. It became apparent that the nature of the diet influenced the composition and activity of the ruminal microflora, which had a major effect on the ratio of volatile fatty acids produced, and this in turn determined the growth and body composition of the animal( Reference Blaxter 2 ).
This new knowledge provided the impetus for Blaxter to revisit the assumptions made about the energy needs for the growth of ruminants and to consider how these are met by diet. Based on extensive calorimetry work undertaken while at the Hannah Institute( Reference Blaxter 3 ) and subsequently at the Rowett Institute, Blaxter examined in meticulous detail how energy from different diets is partitioned into its various components( Reference Blaxter 2 , Reference Blaxter 4 ). From these studies, the concept of metabolisable energy was developed: the fraction of dietary gross energy remaining after the losses in faeces, urine and methane, and hence the energy available to the animal for heat production and for conversion into productive tissue (i.e. muscle and milk). This partitioning between energy usable for growth and that lost as waste was shown to be strongly diet dependent( Reference Blaxter 2 , Reference Blaxter and Wainman 5 ). For example, relative to sheep or cattle fed roughage-based diets, those fed cereal (maize)-based diets lose less energy as waste in the faeces or as methane and, as a consequence, a greater proportion of energy remains available as metabolisable energy for growth on cereal-based diets( Reference Blaxter and Wainman 5 ).
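To illustrate this bookkeeping, the sketch below uses purely invented energy values (not Blaxter's data) to show how metabolisable energy, and the energy retained for growth, might be calculated for two contrasting diets of the same gross energy intake.

```python
# Illustrative only: energy values (MJ/d) are invented and do not come from Blaxter's experiments.

def metabolisable_energy(gross_energy, faecal_loss, urinary_loss, methane_loss):
    """Metabolisable energy = gross energy intake minus faecal, urinary and methane losses."""
    return gross_energy - faecal_loss - urinary_loss - methane_loss

def retained_energy(me, heat_loss):
    """Energy retained in productive tissue (muscle, milk) = metabolisable energy minus heat production."""
    return me - heat_loss

# Two hypothetical diets with the same gross energy intake; the roughage diet loses
# proportionally more energy in the faeces and as methane than the cereal diet.
diets = {
    "roughage-based": {"faecal": 45.0, "urinary": 5.0, "methane": 9.0, "heat": 30.0},
    "cereal-based":   {"faecal": 25.0, "urinary": 4.0, "methane": 6.0, "heat": 30.0},
}

for name, losses in diets.items():
    me = metabolisable_energy(100.0, losses["faecal"], losses["urinary"], losses["methane"])
    print(f"{name}: ME = {me:.0f} MJ/d, retained = {retained_energy(me, losses['heat']):.0f} MJ/d")
```

On these invented figures the cereal-based diet leaves a much larger share of the gross energy available for growth, which is the qualitative point of the metabolisable energy system.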
The concept of metabolisable energy provided the theoretical and practical framework that allowed farmers to select diets maximising the productive growth of their livestock. This system was adopted by the Agricultural Research Council( 6 , 7 ) and was used extensively by farmers across the UK. As a result, it became possible to shorten the time from birth to slaughter for cattle from 2–3 years to as little as 11 months, with obvious economic benefits to the farmer( Reference Blaxter 2 ). In 1960, 75% of beef cattle were farmed using the slower traditional route, whereas by the 1970s as few as 30% followed this approach, since most farmers opted for the faster growth-promoting cereal-based diets. This is a clear example of how nutrition research on livestock had a profound impact on agricultural practice.
Such advances were not without their problems. Farmers across the UK adopted the cereal-based barley–beef system, developed at the Rowett Institute, because of the economic benefits to production( Reference Preston, Aitken and Whitelaw 8 ). Nevertheless, it was discovered that one of the downsides of the barley–beef diet was lesions to the rumen wall of cattle, as well as abscesses in the liver( Reference Fell, Boyne and Kay 9 ). It was believed that this resulted from the low pH of the rumen fluid, which increased blood flow through the rumen wall and caused hypertrophy of the rumen mucosa. Rumenitis then resulted, triggered by hairs ingested by animals licking their coats, which caused penetrating lesions in the mucosal wall( Reference Fell, Kay and Walker 10 ). Fortunately, these problems could be avoided by including at least some roughage in the diet, thereby allowing the production benefits to be achieved without compromising the health and welfare of the animal.
Today, different pressures drive ruminant livestock nutrition. Given the mass production of meat by countries such as the USA, as well as the increasing demand for meat in emerging nations such as China, one of the big issues facing animal nutritionists is how to reduce the methane production associated with ruminant production( Reference Thornton 11 ); another is how to improve the healthiness of the primary product by improving the fatty acid profiles of meat and milk( Reference Givens 12 ). These are not necessarily mutually exclusive goals, as both depend on further tuning of nutrient and energy partitioning by the microbial ecosystem of the rumen. So while the problems may be different, the solutions are likely to build upon the knowledge and understanding of ruminant nutrition and microbiology developed in the 1950s and 60s.
From epidemiology to personalised nutrition
The end of the Second World War was the trigger for another major development in nutrition, but this time in human subjects. Up to that time, the main focus of nutrition research had been on nutritional deficiencies, particularly those caused by a lack of micronutrients and vitamins( Reference Carpenter 13 , Reference Carpenter 14 ), but after the Second World War the focus shifted to the emerging problems of overnutrition. This watershed in nutrition research marks the start of the rise of nutritional epidemiology and takes us on a journey towards personalised nutrition.
The start of this new era was the recognition of the increasing prevalence of heart disease, particularly among business executives and professionals in the USA. While heart disease was being clinically diagnosed by the 1920s, the link to diet had not been made. The decline in the incidence of CVD in Norway, associated with periods of food restriction during the Second World War( Reference Strom and Jensen 15 ), is often cited as the catalyst that prompted thinking about the potential links between diet and heart disease. Whether this is the case or not, the first prospective cohort studies of the effects of diet and other factors on the development of heart disease were set up in the late 1940s. Among these were the Minnesota Business Men's Study( Reference Keys, Taylor and Blackburn 16 ) and the Framingham Heart Study in Massachusetts( Reference Mendis 17 ). From these studies the diet-heart hypothesis emerged, which suggested that dietary cholesterol was a potential risk factor in heart disease.
However, it was the ecological Seven Countries Study established by Ancel Keys that probably did most to cement the relationship between dietary fat intake (primarily from animal products) and the risk of CVD, by showing how the incidence of heart disease increased in direct proportion to the amount of saturated fat in the diet( Reference Keys 18 ). This study is also credited as the first to recognise the health benefits of a Mediterranean diet. It should be noted that at the time the study was criticised because the countries involved were not randomly selected. Despite this, the Seven Countries Study has had a long-lasting impact on the direction of nutrition research ever since. For example, it provided the impetus and basis for all subsequent work on lipid metabolism undertaken in both human subjects and animal models, which clarified the role of LDL- and HDL-cholesterol and other dietary fats in relation to heart disease risk, as well as laying the foundations for public health nutrition policy on heart disease.
Prior to the Minnesota Business Men's, Framingham Heart and Keys' Seven Countries studies, epidemiological approaches had not been used in nutrition research. Hence, these studies represent the birth of nutritional epidemiology, an approach that is widely used today to inform issues of policy relating to nutrition.
One of the key findings to emerge from Keys' Seven Countries Study was the North–South gradient in the prevalence of CVD, with Finland exhibiting high levels relative to Mediterranean countries such as Italy and Greece( Reference Keys 18 ). As a result, one of the most successful public health nutrition intervention trials was initiated, with the main emphasis on changing diets and reducing smoking through local community-based initiatives.
This community-based intervention was initiated in North Karelia and was remarkably successful, resulting in a progressive decline in the prevalence of CVD of 85% between 1972 and 2005( Reference Pietinen, Vartiainen and Seppanen 19 , Reference Puska 20 ). Given the success of the intervention in its first 5 years, it was extended to the whole of Finland with equally spectacular results. Of the factors contributing to the substantial decline in CVD, reductions in saturated fat and cholesterol intake were singled out as the most important( Reference Pietinen, Vartiainen and Seppanen 19 ). The data from this public health intervention trial provide compelling support for the diet-heart hypothesis.
Nevertheless, the simplicity of the diet-heart hypothesis, in terms of the contribution of saturated fat to heart disease, has recently come into question. A meta-analysis of prospective cohort studies, involving the follow-up of 350 000 subjects over 5–23 years, 11 000 of whom developed CHD or stroke, has shown that intake of saturated fat is not associated with an increased risk of either CHD or stroke( Reference Siri-Tarino, Sun and Hu 21 ).
Similarly, another recent analysis of eleven cohort studies involving about 350 000 people, after 4–10 years of follow-up, has shown that replacing SFA with carbohydrates does not prevent CHD over a wide range of intakes, whereas replacement with PUFA appears to be protective( Reference Jakobsen, O'Reilly and Heitmann 22 ).
These studies raise some notable points:
(a) The success of the North Karelia community intervention in lowering the prevalence of heart disease was probably due to the whole-diet approach used; it did not rely solely on reducing SFA.
(b) Focusing on only one nutrient to bring about health benefits may not deliver the expected health outcome. Hence, SFA may well be a villain with respect to heart disease, but low-fat foods in which the fats have been replaced by refined carbohydrate may not provide a benefit.
(c) While dietary advice in the UK is broadly consistent with the message that SFA should be reduced in the diet and replaced with other types of fat, many people would be forgiven for misinterpreting this message as simply a need to reduce SFA intake. Hence, from a policy perspective, there is a challenge as to how to convey nutritional messages as they are refined or become more complex.
The difficulty of translating what is learnt from research into policy is likely to get worse in the next few years. For example, there are many biological intermediary mechanisms other than changes in LDL and HDL cholesterol, which affect the risk of heart disease( Reference Hu and Willett 23 ). These include increased blood pressure, cardiac arrhythmia, systemic inflammation and insulin insensitivity. This only serves to make a holistic approach to dietary advice based on dietary patterns, rather than on single nutrients, all the more important( Reference Hu and Willett 23 , Reference Bhupathiraju and Tucker 24 ).
However, the biggest challenge to policy advice on nutrition is yet to be faced: accounting for inter-individual variation in response to diet, which will follow from an improved understanding of the contribution of an individual's genotype (i.e. personalised nutrition). It is well recognised that individual metabolic and physiological responses to diet are far from uniform. For example, several years ago Schaefer et al. showed that when a group of human subjects was transferred from a high-fat diet to a low-fat diet under isoenergetic conditions, the effect of this dietary shift on blood lipids, such as LDL-cholesterol, HDL-cholesterol and TAG, was highly variable( Reference Schaefer, Lamon-Fava and Ausman 25 ). If public health policy on nutrition is to advance and improve, then the contribution of individual genotype and phenotype to dietary response will need to be taken into account. This inevitably means that policy advice on nutrition will have to become more stratified and complex.
Given the revolution in genome biology that has occurred over the past few years, the rate of discovery of gene polymorphisms and their contribution to individual responses to nutrition is going to accelerate rapidly. This is because of the phenomenal pace of change in DNA-sequencing technology, which has dramatically increased the speed and reduced the cost of sequencing relative to when the first draft human genome sequence was produced 10 years ago( Reference Mardis 26 ). It is now possible to sequence a human genome in about 3 weeks, whereas it took over 10 years for the first draft human genome sequence to be elucidated. New sequencing technology is being developed that promises to reduce the cost and increase the speed of DNA sequencing still further( Reference Rothberg, Hinz and Rearick 27 ). Given the further advances anticipated in genome-sequencing technology, it is therefore not unreasonable to predict that individuals could one day be given their own genome sequence at birth. Such information could then be used to guide an individual's nutritional and therapeutic course through life.
This is not all for the future: a recent meta-analysis of genome-wide association studies involving more than 100 000 individuals identified ninety-five gene polymorphisms in the human genome that influence blood lipid levels( Reference Teslovich, Musunuru and Smith 28 ). Fifty-nine of these were new loci not previously known to influence lipid metabolism( Reference Teslovich, Musunuru and Smith 28 ).
Hence, in a few more years an incredibly rich and complex dataset will be available to support policies in public health nutrition, light years away from what Ancel Keys understood in the 1950s. However, the big question and challenge for nutrition research will be whether all this new information can be made accessible and relevant to public health policy, so that we can progress towards more tailored advice rather than the generic advice to ‘eat a healthy and balanced diet’. This is an important challenge for nutritionists, and failure to meet it is likely to reduce public interest in diet and health.
Obesity: a problem of nutrition?
The other big nutrition research challenge that has emerged during the past 70 years, which also relates to overnutrition, is obesity. Obesity has dominated the nutrition research agenda, as well as the headlines, for the past 20–25 years, since the big change in obesity levels started in the late 1970s and early 1980s( 29 ). This work has also provided an understanding of the relationship between obesity and metabolic diseases( Reference Frayn 30 ), but what we have not been able to crack is how to prevent or reverse the obesity trend.
In evolutionary terms, the rise in obesity is relatively recent, and this points to a change in our environment rather than a change in our genetics to account for the increasing prevalence of obesity. Nonetheless, it is important not to dismiss genetics as irrelevant, as there is ample evidence of the major contribution genetics makes to determining an individual's susceptibility to obesity. A good example comes from a Scandinavian study of identical and non-identical twins, which showed that the correlation in body composition is much stronger between monozygotic than dizygotic siblings( Reference Borjeson 31 ). More recently, it has been estimated that genetics contributes between 40 and 70% of an individual's risk of becoming obese( Reference Loos 32 ).
Nevertheless, the genetics of common obesity is complex. Genome-wide association studies have shown that common obesity is a polygenic trait, influenced by many gene polymorphisms, each of which contributes only a small fraction of the overall susceptibility to obesity. In aggregate, the more risk alleles an individual carries, the greater their risk of obesity( Reference Speliotes, Willer and Berndt 33 ). It may seem surprising, but it is this strong genetic component to obesity that offers nutrition research its best opportunity to help tackle the obesity problem.
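As a sketch of this aggregation (hypothetical only: the variant identifiers, effect sizes and genotypes below are invented and are not drawn from the cited genome-wide association studies), a weighted genetic risk score simply sums an individual's risk alleles, each multiplied by its small per-allele effect:

```python
# Hypothetical weighted genetic risk score for obesity susceptibility.
# Variant names, per-allele effects and genotypes are invented for illustration.

per_allele_effect = {   # notional BMI increment (kg/m^2) per risk allele
    "variant_1": 0.10,
    "variant_2": 0.06,
    "variant_3": 0.03,
}

def genetic_risk_score(risk_allele_counts, effects):
    """Sum of risk alleles carried (0, 1 or 2 per locus), each weighted by its per-allele effect size."""
    return sum(effects[v] * count for v, count in risk_allele_counts.items())

carries_many = {"variant_1": 2, "variant_2": 2, "variant_3": 1}
carries_few = {"variant_1": 0, "variant_2": 1, "variant_3": 0}

print(genetic_risk_score(carries_many, per_allele_effect))  # higher score, higher predicted susceptibility
print(genetic_risk_score(carries_few, per_allele_effect))
```

The point of the sketch is that no single locus matters much on its own; susceptibility emerges from the accumulated burden of many small effects.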
Food is now produced more cheaply than ever before. In small part, this has happened as a consequence of the success of research into animal nutrition undertaken during the 1950–70s, which not only helped improve animal production but also helped the food industry produce a wider variety of products. One consequence of this cheaper food has been a rise in its energy density, which, it has been argued, may have provided a socio-economic driver of the obesity problem( Reference Drewnowski and Darmon 34 ). This argument lends itself to the view that obesity is not really a nutritional issue, but rather a problem of environment, behaviour and choice. We certainly have much more choice today than ever before, as reflected by the rise in the number of food items available on supermarket shelves. Hence, the increased availability of more and cheaper foods has undoubtedly changed our food environment.
Others have argued that it is the lack of physical activity, or increased levels of sedentariness, that is the major contributing factor to the rise in obesity. Evidence in support of this view has come from Ministry of Agriculture, Fisheries and Food National Food Survey data, which show that estimates of energy and fat intake remained relatively stable, and may even have declined, from the 1950s up to 1990, a period during which obesity rose( Reference Prentice and Jebb 35 ). In contrast, indices of sedentariness and reduced physical activity, such as hours spent viewing the TV and car ownership, increased( Reference Prentice and Jebb 35 ). This evidence paints a picture of reduced energy expenditure in the face of relatively unchanged energy intake( Reference Prentice and Jebb 35 ). However, others have suggested that while sedentariness associated with TV viewing results in greater body fatness, this is more likely to be due to greater food intake than to reduced energy expenditure( Reference Jackson, Djafarian and Stewart 36 ).
Given that it is disproportionately easier to over-consume energy than to expend it through exercise, it seems difficult to discount overconsumption as a significant driving force behind obesity. Indeed, data from the National Health and Nutrition Examination Survey studies in the USA, which show that energy density and food intake both increased in parallel with the rise in obesity, support this view( Reference Kant and Graubard 37 ). This makes the UK Ministry of Agriculture, Fisheries and Food data on energy and fat intake between 1950 and 1990 somewhat surprising, and suggests that they should be viewed with caution.
So what are the options for tackling obesity, and does nutrition research have any role to play? Much emphasis is currently being given to altering people's behaviour by understanding the decisions behind their food choices. This is an area of high policy interest, but the question remains whether it will be effective. Given the strong contribution that genetics makes to our susceptibility to obesity, those with strong behavioural drives to over-consume will have genotypes underlying that behaviour. For such individuals, mild behavioural interventions are unlikely to overcome such hard-wired drives to eat.
Of the available approaches to limit food intake, bariatric surgery is currently the most effective: it has been shown to lead to sustained weight loss, as well as to moderate some of the complications of obesity, but it is expensive and difficult to roll out at a population level. Another alternative is appetite-suppressing drugs, but to date these have been found either to have low efficacy or to have unacceptable side effects. Hence, this leaves open the question of whether there are any nutritional approaches that could be used.
It has been known for some time that macronutrients can exert differential effects on satiety. In the light of this, a number of studies have revisited the potential of protein-enriched diets to ameliorate the hunger drive and reduce ad libitum food intake. Weigle et al. demonstrated that increasing the protein content of isoenergetic diets from 15 to 30% reduced hunger in subjects given the diets ad libitum( Reference Weigle, Breen and Matthys 38 ). Similarly, another study by Johnstone et al. showed reductions in hunger when subjects were given ad libitum diets containing 30% protein( Reference Johnstone, Horgan and Murison 39 ). They also showed that this effect of diet was not dependent upon a low carbohydrate content( Reference Johnstone, Horgan and Murison 39 ).
More recently, the EU Framework 6 consortium DIOGENES reported results from a large-scale, multi-site intervention trial of high-protein diets. It found that over a 26-week period a high-protein, low-glycaemic index diet was the most effective for achieving weight maintenance( Reference Larsen, Dalskov and van Baak 40 ). Hence, despite the bad press that high-protein diets have had, there is accumulating evidence that these diets are effective.
Although it is early days, one way that high-protein diets may work is by acting through physiological control systems within the gut. Recent work in mice suggests that high-protein diets may act via the gut hormone PYY, which provides a negative feedback signal to the brain to stop eating by acting on central appetite circuits( Reference Batterham, Heffron and Kapoor 41 ).
These findings, together with other recent work showing that nutrient receptors capable of detecting amino acids, fats and carbohydrates are expressed in the gut( Reference Wellendorph, Johansen and Brauner-Osborne 42 ), suggest that there is still much we need to understand about how nutrient signals are detected, particularly in relation to the control of hunger and satiety. Hence, for nutritionists, overcoming the drive to eat may be a potentially productive avenue for the future, particularly as many food companies are keen to develop products that are more satiating.
A balance of risks
Of course, one of the problems with high-protein diets has been the concern that they are associated with health problems. For example, meat, or more particularly processed meat, has been associated with an increased risk of colorectal cancer( 43 ). While this is an important concern, recent work suggests that the inclusion of some forms of carbohydrate, such as resistant starch and non-starch polysaccharide (fibre), in the diet may help to ameliorate some of the potential negative effects of protein-enriched diets( Reference Russell, Gratz and Duncan 44 ).
At the same time, it is important to put the balance of risks into perspective. According to the World Cancer Research Fund report, the risk of colorectal cancer is increased 1·2-fold for every 50 g increase in red and/or processed meat consumption( 43 ), whereas the risk of developing diabetes from being obese can be as high as 40-fold( Reference Klein, Romijn, Reed, Kronenberg, Melmed and Polonsky 45 ). Similarly, neither bariatric surgery nor the use of therapeutics is risk free. Obesity itself is also a risk factor for colorectal cancer( 43 ). Hence, it would seem that being obese or overweight may carry a greater risk of ill health than consumption of high-protein diets. For this reason, we need to understand how high-protein diets work and to develop ways of reducing or eliminating any unwanted side effects. On this basis, we will gain a better understanding of the balance of risks associated with high-protein diets and with being obese, which will lend itself to a more integrated and holistic approach to risk management.
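As a rough, back-of-the-envelope comparison (assuming, purely for illustration, that the 1·2-fold risk per 50 g of meat compounds multiplicatively with larger intakes, a simplification of the reported dose-response), even a substantial increase in meat intake yields a relative risk far below the up-to-40-fold figure cited for diabetes in obesity:

```python
# Rough comparison of relative risks; assumes the 1.2-fold increase per 50 g of
# red/processed meat compounds multiplicatively with intake (a simplification).

def meat_colorectal_rr(extra_g_per_day, rr_per_50g=1.2):
    """Relative risk of colorectal cancer for a given extra daily meat intake."""
    return rr_per_50g ** (extra_g_per_day / 50.0)

obesity_diabetes_rr = 40.0  # upper figure cited for diabetes risk in obesity

for grams in (50, 100, 200):
    print(f"extra {grams} g/d meat: RR ~ {meat_colorectal_rr(grams):.2f} "
          f"(vs up to {obesity_diabetes_rr:.0f}-fold for diabetes with obesity)")
```

On these simplifying assumptions, doubling or quadrupling the 50 g increment still gives a relative risk of roughly 1·4 to 2, illustrating the order-of-magnitude gap in the balance of risks discussed above.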
From fetal programming to food security
The fourth issue that has made a major impact on the direction of nutrition research is the study of early-life nutrition, which has come to prominence relatively recently. Until a couple of decades ago, a common view was that the pregnant mother acted as a buffer to the outside world for her fetus and that the fetus was thereby protected from environmental and nutritional stresses.
This view changed in 1991 when Hales and Barker reported that the body weight of babies at birth was inversely correlated with the risk of developing impaired glucose tolerance later in life, as well as with the risk of developing the metabolic syndrome( Reference Hales and Barker 46 ). Subsequent work by Barker and co-workers showed that a similar relationship held for low body weight at birth and the risk of developing CVD( Reference Osmond, Barker and Winter 47 ). Studies from across the world have replicated these findings( Reference Godfrey, Gluckman and Hanson 48 ). The relationship between low birth weight and poor health outcomes in the offspring led Hales and Barker to postulate the thrifty phenotype hypothesis( Reference Hales and Barker 46 ), which in essence suggests that fetal malnutrition sets in train a series of physiological and/or metabolic adaptations that should maximise the chances of survival of offspring born into conditions of poor postnatal nutrition.
More recently, Gluckman et al. have extended this hypothesis to what they have called the predictive adaptive hypothesis, which attempts to provide a more flexible and evolutionary framework to explain fetal programming, even where there may be phenotypic changes without overt changes in fetal or birth weight( Reference Gluckman, Hanson and Beedle 49 ). The key aspect of both the thrifty phenotype and predictive adaptive hypotheses is that a mismatch between the nutritional environment of the fetus in utero and that of the offspring post-natally provides the basis of poor lifelong health outcomes. Hence, a fetus carried by an undernourished mother, but born into a nutritionally enriched environment, is considered mismatched and therefore at risk of disease later in life, whereas offspring that remain matched to their in utero experience have a reduced risk of disease later in life( Reference Gluckman, Hanson and Beedle 49 ).
This mismatch concept has clear importance to all of us, but at present it is potentially most important in countries of the developing world undergoing economic transition. As such countries become wealthier, there is increasing demand for more energy-rich, westernised diets, a process that has been called the nutritional transition( Reference Popkin 50 ). According to UNICEF, an estimated 19 million newborns in the developing world are low-birth-weight babies of less than 2·5 kg( 51 ). Of these, 7·4 million are born in India( 51 ), an emerging nation undergoing nutritional transition. India is also experiencing epidemics of diabetes and CVD. Between 1970 and 2000, type 2 diabetes increased from less than 3% to more than 12% of urban Indian adults( Reference Ramachandran, Snehalatha and Kapur 52 ), and currently India has the highest prevalence of diabetes in the world( Reference King, Aubert and Herman 53 ). There has been a similar rise in CVD among Indian adults, and genetics must be playing a major role in the susceptibility to these diseases, as discussed earlier. Nonetheless, there is also likely to be a strong component arising from the in utero experience and from being born at a low birth weight. For children born to undernourished mothers who then go on to enjoy a more westernised lifestyle, this mismatch will only exacerbate public health nutrition problems in emerging nations such as India. Hence, both in terms of food security and public health policy, it is clearly important to understand the contribution that early-life nutrition makes to disease risk and to identify how any risks can be prevented and adverse effects reversed.
To achieve this requires an understanding of the mechanistic basis of nutritional programming, and to date such studies have relied heavily on animal models. Usually one of four nutritional stresses is used to mimic the potential human situation: undernutrition, low-protein diets, Fe deficiency and high-fat diets( Reference Warner and Ozanne 54 ). While there are examples of common responses to these different types of nutritional stress, such as raised blood pressure in response to low-protein, Fe-deficient and high-fat diets, it is more often the case that the physiological and metabolic outcomes vary according to the diets and models used. This has made finding common underlying mechanisms somewhat challenging.
Nevertheless, there is a common view that the mechanisms involved in programming are likely to involve epigenetics, a phenomenon involving altered gene expression, transmissible across generations, but without changes to the genetic code. There are three main epigenetic mechanisms: methylation of cytosine bases of DNA, methylation and acetylation of histone and chromatin proteins, and interfering RNA (or microRNA)( Reference McKay and Mathers 55 ). Of these, DNA methylation is perhaps the best explored and, at present, the easiest route by which to understand the potential influence of nutrition, as protein restriction, folate deficiency, amino acid restriction and choline deficiency can all influence the generation of S-adenosyl methionine by the folate/methylation cycles, which in turn affects DNA methylation and gene expression( Reference Warner and Ozanne 54 , Reference McKay and Mathers 55 ). There is also evidence that nutrition can reverse such effects of dietary stress on DNA methylation. For example, folate supplementation has been shown to reverse the effects of a low-protein diet on the methylation status of selected genes and their expression in the liver of rats( Reference Lillycrop, Phillips and Jackson 56 ).
While it is encouraging that we are starting to understand some of the potential mechanisms involved in programming, we are still a long way from understanding the relationship between diet, epigenetics and disease. For example, it is evident that DNA methylation is a highly specific and targeted event, yet we still have no idea how generic dietary stresses translate into targeted cell-, tissue- and gene-specific effects.
Hence, it is clear that while fetal programming and epigenetics are exciting and relatively recent developments in nutrition research, we still have a long way to go towards understanding cause and effect. Once again the challenge for future nutritionists will be to make sense of all these complexities so that they become useful to public health and do not simply remain of academic interest.
Perspective
Nutrition research has seen some dramatic changes in the last 70 years.
Research into animal nutrition during the post-war decades transformed the nature of animal production and contributed to improved food security. Today's animal nutritionists are still focused on food security, but the aim now is to improve the healthiness of animal products while doing so with a lower carbon footprint and greater sustainability.
Human nutrition moved away from problems of deficiency to relationships between diet and complex diseases, explored through epidemiology, and this laid the foundation of today's public health nutrition policy and advice.
The recent advances in genomics could take nutritional advice to a new level, by accounting for individual variation in response to diet. However, this, combined with the need to consider nutrition on a holistic rather than a single-nutrient basis, will create new challenges in developing simple and understandable policy and advice.
Obesity is still very much the problem of the moment, and in part is born of the advances and success of earlier animal nutrition research, but nutritionists must take a proactive role in finding solutions and should not leave the problem solely to the realms of social science.
Finally, as we attempt to resolve the problems of food security, the work of nutrition researchers will be vital in preventing a rise in complex diseases in countries undergoing nutritional transition.
So, far from the demise of the subject that some foresaw in the 1940s, it is clear that nutrition has evolved into a discipline spanning molecular biology through to public health and, as a result, continues to make a contribution to science and society that truly ranges from plough through practice to policy.
Acknowledgements
The author would like to acknowledge the funding support of the Scottish Government, Rural and Environmental Science and Analytical Services. The author has no conflicts of interest.