Stephen Wooding (U. of Utah) is elated. He sees an “exciting trend” in genetic research that might, finally, demonstrate positive natural selection acting on a gene with a clear phenotypic effect (measurable outward benefit). Writing in the Sept. 7 Current Biology,1 he mentions a few recent papers suggesting this connection, but focuses particularly on one study by Rockman et al. in the same issue.2 This UK/American team claims to have identified a gene that has been positively selected to shape heart disease risk among Europeans. The story was summarized by EurekAlert. The gene under investigation is named MMP3, a regulator of a substance that builds coronary artery walls. The amount of up- or down-regulation of this gene affects the walls’ elasticity and thickness. The researchers compared this gene and its surrounding DNA between nine kinds of monkeys and apes, and between six human populations. They claim to have found a trend among Europeans to possess a certain mutation that up-regulates the products of MMP3 (because it inhibits repressive factors). This leads to less hardening of the arteries but more risk of blood-clot-induced heart attack (myocardial infarction) or stroke. The mutation changes one T to a C at a certain position on the gene. Using molecular phylogenetic techniques, they estimated the mutation might have occurred in the European line anywhere from 36,600 to 2,200 years ago. Maybe it came about in the Ice Age, they surmise, and natural selection acting on this mutation may have given Europeans dining on animal fat some protection from atherosclerosis. Whatever, the selection probably did not act alone on that one gene, which only regulates other genes, but on a suite of genes due to pleiotropic effects (i.e., when one gene evolves, other unrelated phenotypic effects can result). 
The authors seemed happy to be able to provide an example of natural selection acting positively on a gene for a beneficial physiological effect: “The evolutionary forces of mutation, natural selection, and genetic drift shape the pattern of phenotypic variation in nature, but the roles of these forces in defining the distributions of particular traits have been hard to disentangle.” (Emphasis added in all quotes.) “Natural selection is an important factor influencing variation in the human genome, but most genetic studies of natural selection have focused on variants with unknown phenotypic associations. This trend is changing. New studies are rapidly revealing the effects of natural selection on genetic variants of known or likely functional importance…. These [studies on] variants [on genes with known phenotypic effects] are particularly interesting from an evolutionary standpoint because they are where the phenotypic rubber meets the road of natural selection – variants upon which natural selection could be having particularly direct effects.” Those assuming this was old news since Darwin’s day might be surprised at this admission that studies have rarely connected a mutation to an actual physical benefit. Analyses at the molecular level of the gene, to be fair, have only recently become possible. Stephen Wooding is greatly encouraged by this study. He thinks it represents not only an exciting trend, but a new means of paving “an unusually direct path between ancient human history and modern human health.” Rockman’s team claims that British men would have 43% more heart attacks had this mutation not occurred among their distant ancestors. 
But then, since hardening of the arteries seems to be a recent malady among humans, he admitted that maybe the natural selection at the time was for something else “and the heart disease effect was incidental.” One other benefit Rockman claims for this study is that it shows natural selection can act not only on the genes that make proteins, but on the genes that regulate other genes – a factor he claims “traditional evolutionary biology has all but ignored.” Considering the evolution of regulatory factors extends natural selection theory to the level of the “wiring diagram,” he says. No longer should we just consider good genes and bad genes. “Rather, there is a complex set of interactions” such that certain combinations might be best in one environment, others better in another. “So we’re advocating a more nuanced view of how we view the genetic bases of disease,” he said in the press release from Duke University.

1. Stephen Wooding, “Natural Selection: Sign, Sign, Everywhere a Sign,” Current Biology, Volume 14, Issue 17, 7 September 2004, Pages R700-R701, doi:10.1016/j.cub.2004.08.041.
2. Rockman et al., “Positive Selection on MMP3 Regulation Has Shaped Heart Disease Risk,” Current Biology, Volume 14, Issue 17, 7 September 2004, Pages 1531-1539, doi:10.1016/j.cub.2004.08.051.

Remember the old moron jokes? “How do you keep a moron busy for an hour? Put him in a round room and tell him there’s a penny in the corner.” It doesn’t take much to amuse Darwinists. Tell them there’s a hint of natural selection in the human genome, and it is incredible the amount of work they will do to find it. You can bet any claims will be ambiguous, hazy, uncertain, questionable and open to different interpretations, but if they can be offered in homage to buddha Charlie, it’s worth it to them to run in logical circles and keep the candles of hope burning. 
(For another example, look at this story on EurekAlert, about Penn State scientists “hunting illusive signs of natural selection” between Europeans and Africans, and finding only ambiguous signs of differing susceptibility to disease or milk intolerance.) What did these guys find, really? One single-nucleotide polymorphism in just one gene out of hundreds that regulate heart health. Sure, tweaking the regulation of this gene might put a person at risk for hardening of the arteries, but is Darwinian evolution the only explanation? The Europeans could have descended from a clan whose grandpappy had the mutation at the Tower of Babel, for that matter; how could they prove otherwise? The monkeys they studied had very different polymorphisms of these genes, and you don’t see them all keeling over from heart attacks. If natural selection acted on this gene, why didn’t it act on Siberians or Eskimos or Australians or others at similar latitudes? Did this mutation lead to a new organ or function or add to the genetic information? No, it only tweaked the existing information. And some evolution! Pick your poison: increased risk of atherosclerosis, or increased risk of myocardial infarction. Is this one of the finest examples they can find of the miracle-working mechanism of natural selection, the discovery that made Chairman Charlie famous, so powerful that during the same period of time it turned monkeys swinging from trees into humans writing books? The line about Ice Age men benefiting from the mutation because of their mammal-fat diet is comical. How could that help the population genetics, if the individuals most likely got their heart attacks after having children? The error bars on their dates are huge, even if one were to swallow the highly questionable phylogenetic techniques they used, and the evolution-based assumptions about mutation rates. 
A chain of reasoning is only as strong as its weakest link: e.g., “if there was water on Mars, there might have been life, therefore there might have been intelligent life, therefore there might have been lawyers.” Evolutionists get away with stacked assumptions only because they have ruled out anything other than naturalistic explanations. Since the only contender is something akin to Darwinism, it’s the best they can offer (see Best-in-Field Fallacy). Why are we the only ones questioning the Darwinist spin on this paper, and asking the hard questions while the other science outlets mindlessly inherit the wind and parrot the spin with lines like “Heart gene yields insights into evolution”? Why not consider the obvious, that a functioning circulatory system is a tremendous example of interrelated, functional design? The diagnosis is simple. It is that ancient human malady, hardness of heart.
19 June 2015

Afro pessimists step aside, because the big leagues are bringing their green bucks to Africa. That’s the message from TPG Growth and Satya Capital, which announced a billion-dollar investment partnership in Africa on 18 June. The partnership is TPG’s first African-focused investment vehicle. It will invest in growth stage companies and the next generation of entrepreneurs across the continent. Satya managing partner Moez Daya said the partners were looking for entrepreneurial partnerships “north of the Limpopo River” that could also benefit South African businesses. “The growth of the consumer-facing middle class is greater in sub-Saharan Africa,” he said. “South African growth is a lot more constrained. Growth beyond South Africa is faster, but more risky. The business we do is partnering great companies, those who are breaking through the [Limpopo] border. We will be the ideal partners for South African companies wanting to do that. We bring capital, tools, know-how and the relationships that we’ve built across Africa.”

Big investment

Satya Capital was started by Sudanese-British billionaire Mo Ibrahim, who sold his African-based mobile communication company Celtel for $3.4-billion (about R42-billion) in 2005. TPG, the global private investment firm, has $74-billion of assets under its management, while TPG Growth, its middle market and growth equity investment platform, has $7-billion. The latter’s current and past investments represent a mix of disruptive and innovative companies across tech, retail and entertainment, including Uber, Airbnb, Box, Domo, Beautycounter, Ride, Angie’s Artisan Treats, Fender, SurveyMonkey, Evolution Media and STX Entertainment, among others. 
The partnership with Satya Capital, an independent investment firm focused on Africa, brought deep regional expertise, relationships and on-the-ground experience, said Daya.

Growth focus

The money will be provided by TPG Growth, which will look for companies and entrepreneurs in all sectors that are in need of capital to help them grow, including in health care, technology, media and telecommunications, and consumer and financial services. While Satya normally targeted investments of between $20-million and $150-million, this partnership would allow it to broaden the scope to between $1-million and $200-million, said Daya. “It is exciting for African entrepreneurs looking for investors,” he said. “TPG is willing to invest up to $1-billion in Africa provided it is the right company. They are not bound by geography, but go where the opportunity is. However, this partnership allows them to focus more on the growth in Africa.” Daya, the former chief executive of Celtel International, said the business was a “landmark opportunity and company” for Satya Capital. “We’ve [Satya] been operating for seven years in Africa and built a portfolio of companies.” Satya Capital, which has capitalised $300-million in Africa, is focused on an evergreen model. “That means we keep recycling the money within a company instead of withdrawing our investment,” he explained. 
“We invest in companies without a strict limit on when to leave, which gives businesses the ability to realise their full potential and brings stability,” he said, adding that the partnership with TPG was not part of his group’s evergreen strategy.

Africa risk

Daya said the risk of doing business in Africa had micro and macro elements. “There is a risk in who you do business with and who you partner with, which is something we factor in,” he said. “South Africans are used to working in private equity, but that is not the case in sub-Saharan Africa. A lot of businesses are used to the old way of doing business, but they need to start looking to the new way. It’s a challenge for them and that’s the execution risk. Then there is the risk of economy and infrastructure and political risk that goes with doing business in sub-Saharan Africa,” he said. “You also have depreciation issues, but the underlying growth exceeds most of those risks.”

Source: News24Wire
HTTPS Enabled by Default

For an additional layer of security, all Wave traffic is encrypted by default via HTTPS, a protocol for secure communications. That represents a big change in Google’s standard policy regarding use of this protocol. It wasn’t until July of 2008 that Gmail users were even given the option to encrypt messages using SSL, and to enable it, you had to go into your settings and make a change – something most mainstream users would never have bothered with. By the end of 2008, Google was only offering SSL as a feature in its other Google Apps programs if users were on either the Premier or Education editions. That meant that for non-paying consumer users, Google Docs, Calendar and other online offerings were only available via unencrypted HTTP sessions. Today, little has changed. Still, only users of the Premier and Education editions have access to SSL, and it’s not switched on by default. The protocol is now available for Gmail, Chat, Calendar, Docs and Sites, but not the Start page, Google Video or the Google Talk desktop client. Consumers using free Google apps like Docs still don’t have SSL unless they type it into the address bar manually. D’alesandre admitted that switching on encryption in Wave slows down the service a little (which probably explains the company’s hesitance to switch it on in other products, too), but they ultimately decided that the security it provides was worth it.

Whitelisting Kills the Noise

A third security feature of sorts coming to Wave in the future is the ability to do “whitelisting.” Wave users will be able to select which people they want to collaborate with and place them on a whitelist of approved persons. Only those who are on the list will be able to contact you via Wave; everyone else will be ignored. 
That feature should certainly help to address the concerns certain folks have about Wave’s “noise level” – to some, an overwhelming amount of activity that led them to call out Wave as a distraction and a time-waster instead of the futuristic productivity product it intends to be. By allowing those who can’t seem to embrace Wave’s cacophony the ability to limit their collaborators, Wave could transform from noisy attention killer to useful tool in an instant. Of the three features, the first two are already in place. No date was given on the whitelisting feature, only that it will be “coming soon.”

Google Wave, the company’s new real-time collaboration platform currently in private beta, is more secure than traditional email, claims the company. According to Greg D’alesandre, Google Wave product manager, that’s because Google focused on addressing privacy and security issues as the product was built from the ground up instead of waiting to deal with them later. Speaking to media in Sydney today, he detailed several of Wave’s security features, which are meant to stop criminals from exploiting the new technology and harming Wave users.

Built-In Features to Prevent Spoofing

As reported by Australian news outlet ITNews, Wave has multiple levels of security designed to prevent email spoofing. Spoofing occurs when you receive an email that claims to be from a person or company you know but is actually from someone else – a hacker, in most cases. 
D’alesandre says the Wave protocol is more secure because it includes something he jokingly refers to as “crypto fairy dust.” That’s obviously meant to be a simple and fun way to explain the security complexities built into Wave which involve detailed authentication mechanisms to keep users safe from malicious attacks. In Wave, every bit of info you receive from another Wave user has already been authenticated as to its origin so you can be assured that they are who they say they are. “You know you are getting the Wave from the person that is sending it to you and it has not changed mid-stream. This is a very big problem in current communication technologies – data can be changed mid stream and you will never know,” said D’alesandre.
Earlier this year we wrote about an infographic that visualized what 10 petabytes looks like. An infographic tonight by the team at Focus takes a different look at how data is defined. The Focus infographic “uses measures only associated with data.” Quantities of data are discussed in abstract terms. What do you think? Does the Focus infographic clearly illustrate how data is defined?
The number of people who have registered accounts on Twitter has now surpassed 200 million, a representative of the company said publicly yesterday. Katie Stanton, Twitter’s VP of International Strategy, said at the Guardian Activate conference in New York that there are now more than 200 million Twitter accounts worldwide and that more than 70% of all Twitter’s traffic comes from outside the U.S. That means Stanton’s job is very important, and whatever her International Strategy is, it seems to be working. Roughly 25% of all tweets come from Japan alone, she said. Twitter said last month that it was adding just under a half million new accounts per day, and its business-facing page says there are 175 million registered users. Independent research by the blog Business Insider concluded last month that there are 56 million people who are following 8 or more other people on Twitter, a number that site used to estimate the number of active Twitter users. Twitter typically doesn’t comment on third-party estimates regarding its users or their engagement. The company is focused on a long-term strategy of emphasizing read-only use cases for its service as a means of increasing its user retention. Facebook’s user numbers are several times higher than Twitter’s. The Huffington Post’s Bianca Bosker wrote first about Stanton’s public statement of 200 million registered users last night. We’ve requested comment from Twitter PR and have not yet received a reply; presumably the company will make a formal announcement soon. Update: Twitter PR responded to my inquiry by email just as I put this post up and said that this was not the first time these numbers have been discussed publicly and that neither number is in fact new. 
If anyone can find me a link to a report online about that number as a fact – and not an estimate or fast-approaching number of registered users – I will say very nice things about you. I certainly looked, myself, before writing this. You can follow me on Twitter here and the whole ReadWriteWeb team here. So far, only a portion of the 200 million people registered to use Twitter have discovered the joy that comes from engaging with us there, but we consider each and every one of you who has to be absolutely precious. New friends should come and join us; we can chat about current events, fascinating things we find to read online and what we all ate for breakfast.
Here’s a quick introduction to the four main types of tools you need to know in order to calibrate footage, sound, and monitors. Whether you’re in the field or in the editing bay, your eyes and ears alone are never enough to accurately judge the fidelity of your images – or sound. In this article, we’re taking a look at the four main types of tools you’ll need to know how to use to properly calibrate your footage, sound, and monitors. First, you need to calibrate all of the cameras and monitors on-set and in the editing studio in order to be certain that all alterations made to the footage aren’t based on improperly calibrated recorders – or monitoring devices. Proper calibration starts before you roll the cameras. Everyone knows how to hold a card in front of the camera for white balance, but let’s take a look at the three main types of balance cards, and what they do best.

Balance Cards

Image via ChromLives.

White

White cards are – well – white. Pure white, to be exact. When setting the white balance to a white card, the camera (or processing software) should be able to properly calibrate the ambient light of a shot along the blue-yellow axis of the color wheel. White balancing the image against any color other than pure white can alter the accuracy of the colors in the image, introducing headaches to the post pipeline (unless you’re doing it on purpose). Using a pure white card also aids in exposure. If the card is blown out, it’s likely your shot is overexposed. To use a white card, hold it in the key light of the shot, navigate to the manual white balance setting on your camera, aim the crosshair (or indicator) at the card, and set the white balance – it’s that easy.

Gray

For exposure, gray cards are the better tool. Gray cards are a specific hue and brightness referred to as neutral gray – or 18% gray. 
The “18%” or “neutral” refers to the luminance value of the gray, which is essential for a number of important filmmaking tools. The luminance of the average light in an average scene, averaged together, results in a value of 18%. Because of this, camera sensors are tuned to produce proper exposure at 18% gray. The power of the gray card is its utility in dialing in contrast ratios, lighting setups, and other exposure applications. To use gray cards, have someone position the card in important areas of light (in the scene) while you adjust camera settings, meter the light, or otherwise instruct your crew. Through the viewfinder, you know that you are looking at an 18%, neutral gray reference. Therefore, dialing in an accurate look is as simple as adjusting the light in a given area, or simply stopping up or down. While contentious on the internet, white balancing on a gray card can skew the final balance of the shot. The inaccuracy might not be immediately obvious. However, when setting the white balance, the camera (or software) is outputting results based on an assumed input of white, or the closest thing to white in the frame. Even though neutral gray won’t skew your color balance significantly, it will affect the output of the pixels on the other side.

Color Chart

Image via X-Rite.

Color charts are commonly referred to as color chips, passports, or checkers, and they serve a similar function to the balance cards. Although they are the most expensive of the three (at about $100), a color chart can largely replace gray and white cards if it also has proper white and gray patches. Color charts comprise 16 color patches – including pure black, pure white, and 18% gray portions – in addition to a number of other various tools, depending on the model. To use a color chart most effectively, roll with it positioned in the key light of the shot (for a few seconds). 
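Since each stop represents a doubling of light, the stop difference implied by a lighting ratio can be sketched in a couple of lines. This is an illustrative helper, not from the article; the function name and the lux inputs are our own:

```python
import math

def ratio_to_stops(key_lux, fill_lux):
    # Each photographic stop is a doubling of light, so the difference
    # between two light levels, measured in stops, is log2 of their ratio.
    return math.log2(key_lux / fill_lux)

# A classic 4:1 key-to-fill contrast ratio works out to 2 stops:
print(ratio_to_stops(400, 100))  # -> 2.0
```

Metering the key and fill against the same neutral gray reference is what makes the two readings comparable in the first place.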
If someone’s holding it, have them rotate it from left to right a few times, making sure to get multiple frames without any glare on the chip. With each lighting setup, if you capture a few seconds of footage of the chart in the key light, you can save dozens (if not hundreds) of hours in the post-production process. The DIT, or colorist, loads the footage into DaVinci Resolve and simply lines up the boxes on the color chart with an overlay. Then, Resolve automatically matches the colors in the chart (in the footage) with their known values. This essentially bypasses the tedious correction phase of the grading workflow while ensuring highly accurate colors.

Test Tone

One of the simplest and most useful calibration tools is the audio tone generator. Although there are a myriad of tones for specific sound applications, the most commonly used for calibration in video production is 1000 Hz at -20 dB. Tone calibrates the recording and playback devices to a standard loudness rating that ensures consistent sound levels, regardless of the final playback device. To use reference tone, simply play it (or otherwise generate it) and raise the levels until they reach the desired level on the meters. This is usually 0 dB, but different generators and programs have different target thresholds. Premiere, for instance, generates its bars at -12 dB. Regardless of where you set your levels, the point remains the same – tone ensures that your playback devices are set with enough headroom in the mix to not clip and cause uncomfortable crackling in the speakers during playback.

SMPTE Color Bars

Color bars are one of the most easily recognizable calibration tools, thanks to their widespread use in televisions, cameras, and other recording and playback devices since 1956. 
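To make the numbers concrete, here is a minimal sketch of generating that 1 kHz reference tone digitally. It assumes a 48 kHz sample rate and treats -20 dB as dBFS (decibels relative to digital full scale); the conversion from decibels to linear amplitude is 10 raised to (dB / 20):

```python
import math

SAMPLE_RATE = 48000  # samples per second (an assumed production rate)

def test_tone(freq_hz=1000.0, level_dbfs=-20.0, seconds=1.0):
    # Convert the dBFS level to linear amplitude: 10 ** (dB / 20).
    # -20 dBFS therefore corresponds to an amplitude of 0.1 of full scale.
    amplitude = 10 ** (level_dbfs / 20.0)
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

tone = test_tone()
peak = max(abs(s) for s in tone)
print(round(peak, 3))  # -> 0.1, i.e. -20 dBFS
```

The same 20-log rule explains the headroom point: raising the meter target from -20 dB to Premiere’s -12 dB means roughly 2.5 times the linear amplitude, and correspondingly less room before clipping.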
Though modern color bars have added a few features, their form and function have largely remained the same for 63 years. Color bars break down into three vertical sections, all of which are used for calibrating color, contrast, and brightness. While the color bars are simple enough to understand, there are a few sections of the bars that might not be intuitive without prior experience calibrating with bars.

PLUGE Bars

The three small rectangular stripes below the red bar are referred to as the “PLUGE” bars. PLUGE is short for Picture Line-Up Generating Equipment, and you use these bars in conjunction with the white square in the bottom portion of the pattern to properly adjust the shadows and black levels at very low exposure levels on the screen.

-I and +Q

The other portions of the bars worth mentioning are the dark blue and purple squares in the bottom left of the pattern. You use these blocks to decode the color portion of a video being broadcast for television using NTSC (North and Central America, Japan). -I is an abbreviation of “-In-Phase,” while +Q means “Quadrature.” For most people using color bars, these portions can be skipped. Here is a video showing the full process of calibrating a monitor using SMPTE Color Bars from Lynda:

If you prefer a written walkthrough, PremiumBeat’s Rachel Klein has a fantastically detailed article on calibrating monitors using color bars. 
We’ll list the steps here as well:

1. Generate or load color bars.
2. Dim the lights and make sure there are no reflections on your monitor.
3. Set “Contrast” or “Picture” to the middle.
4. Bring “Chroma” or “Color” to zero.
5. Adjust “Brightness” until the middle PLUGE bar just barely disappears (the far-right PLUGE bar should be just barely visible).
6. Bring “Contrast” or “Picture” up to 100, then lower it until the white square in the bottom right just begins to be affected.
7. Bring “Chroma” or “Color” back up until the colors no longer bleed and the edges between rectangles are clearly defined.
8. Adjust “Hue” until the yellow bar is lemony and the magenta bar is pure magenta (not skewing red or purple).

With a basic understanding and application of these four tools, you can rest soundly after a long shoot, knowing that when you load your picture and sound into your editing program in the morning, everything will look and sound just the way you wanted.

Cover image via antb.

Looking for more articles on film and video production? Check these out:

How to Perfectly Position Your Content in Premiere Pro 2019
The Essential Guide to Finding Deals on Video Production Gear
Multi-Camera Direction Tips for Properly Shooting Live Events
The 6 Best Filmmaking Cameras Under $1,000
How to Use These 4 Free Animated Texts in Your Videos
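For readers who want to poke at the pattern itself, the seven large 75%-intensity bars at the top of the SMPTE pattern can be sketched as RGB values. This is a rough illustration under our own assumptions (8-bit RGB, equal-width bars); the actual broadcast standard specifies the bars as analog video signal levels, not pixel values:

```python
# The seven 75%-intensity bars across the top of the SMPTE pattern,
# left to right, as 8-bit RGB triples.
LEVEL = int(0.75 * 255)  # 191

BARS = [
    ("gray/white", (LEVEL, LEVEL, LEVEL)),
    ("yellow",     (LEVEL, LEVEL, 0)),
    ("cyan",       (0, LEVEL, LEVEL)),
    ("green",      (0, LEVEL, 0)),
    ("magenta",    (LEVEL, 0, LEVEL)),
    ("red",        (LEVEL, 0, 0)),
    ("blue",       (0, 0, LEVEL)),
]

def bar_scanline(width=700):
    # One horizontal scanline of the top section: each bar gets an
    # equal share of the width.
    per_bar = width // len(BARS)
    row = []
    for _name, rgb in BARS:
        row.extend([rgb] * per_bar)
    return row

row = bar_scanline()
print(len(row), row[0], row[-1])  # -> 700 (191, 191, 191) (0, 0, 191)
```

The left-to-right ordering is what makes the pattern useful for eyeballing hue: it descends in luminance, so any bar that looks out of sequence on a monitor signals a calibration problem.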
Maharashtra has sought increased financial assistance from the Centre for the next five years to tackle drought. Presenting a charter of demands at the fifth governing council meeting of Niti Aayog, Chief Minister Devendra Fadnavis urged the Centre to step in with increased assistance to help transport water from the Godavari basin – criss-crossing Maharashtra, Telangana, Andhra Pradesh, Chhattisgarh and Odisha – back into Marathwada and Vidarbha. “My government’s effort is to bring more water from Godavari into stressed areas of Marathwada and Vidarbha to make them free of scarcity in the future. This is my government’s priority for the next five years, and since the Centre has assisted us the last five years, I have forwarded similar demands to the Niti Aayog,” Mr. Fadnavis said in New Delhi on Saturday. The Maharashtra delegation – in back-to-back meetings with Hardeep Puri, Union Minister for Housing and Urban Affairs, Civil Aviation, Commerce and Industry; Gajendra Singh Shekhawat, Jal Shakti Minister; and Dharmendra Pradhan, Minister of Petroleum, Natural Gas and Steel – discussed pending issues related to the Nagpur airport, Mumbai airport, Nagpur Metro phase 2 and PMAY (Pradhan Mantri Awas Yojana-Urban), senior officials said. The CM sought central assistance from Mr. Shekhawat for the Marathwada water grid and river-linking projects on the Godavari. “We made a detailed presentation to him (Mr. Shekhawat). Other meetings focussed on water, agriculture and market reforms, including six to seven other concerns that the State faces. We have apprised various ministries of the work Maharashtra has done in the water resources and agri sectors,” the CM said. The agenda of the governing council was focused on internal security, development in left wing extremism (LWE) areas, and reforms of the Agriculture Produce Marketing Committee (APMC) Act and the Essential Commodities Act, 1955. 
Other issues discussed were the progress of the Aspirational Districts Programme, launched in January 2018, and ways of encouraging water conservation through rainwater harvesting, officials said. Maharashtra has declared 151 talukas drought-affected and is receiving central assistance of ₹4,714 crore. The government had deployed 5,493 tankers to provide water to 4,331 villages after the 151 talukas were declared drought-hit. The Opposition alleged that the State government was purposely restricting water supply through tankers in areas where the scheme has not performed well.