Information, Interaction, and Influence – Information intensive research initiatives at the University of Chicago

Sam Volchenbaum 

Center for Research Informatics – established in 2011 in response to the need of researchers in the Biological Sciences for clinical data. Hired Bob Grossman to come in and start a data warehouse. Governance structure – important to set policies on storage, delivery, and use of data. Set up a secure, HIPAA- and FISMA-compliant data center and got it certified, which allowed storage and serving of data with PHI. Got IRB approval of the infrastructure to serve up clinical data. Once you strip out identifiers, the data is no longer under HIPAA. Set up data feeds and had to prove compliance to the hospital – lots of maneuvering. Deployed the open source i2b2 software to let researchers discover cohorts meeting specific criteria. Developed a data request process to gain access to data; seemingly simple requests can require considerable coding. Will start charging for services next month. Next phase is a new UI with Google-like search.
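
To make the point about "seemingly simple" requests concrete, here is a minimal sketch of the kind of query that can sit behind a single cohort request. The column names loosely follow the i2b2 star schema, but the table, codes, and thresholds are hypothetical, not the CRI's actual warehouse.

```python
import sqlite3

# Hypothetical de-identified warehouse; column names loosely follow the i2b2
# star schema (patient_num, concept_cd, nval_num, start_date).
conn = sqlite3.connect("deidentified_warehouse.db")

cohort_sql = """
SELECT DISTINCT dx.patient_num
FROM observation_fact AS dx
JOIN observation_fact AS lab
  ON lab.patient_num = dx.patient_num
WHERE dx.concept_cd = 'ICD9:205.0'       -- diagnosis of interest (example code)
  AND lab.concept_cd = 'LAB:WBC'         -- related lab test (example code)
  AND lab.nval_num > 50.0                -- threshold on the lab value
  AND julianday(lab.start_date) - julianday(dx.start_date) BETWEEN 0 AND 90
"""

cohort = [row[0] for row in conn.execute(cohort_sql)]
print(f"{len(cohort)} patients meet the criteria")
```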

Alison Brizious – Center on Robust Decision Making for Climate and Energy Policy

RDCEP is very much in the user community. Highly multi-disciplinary – eight institutions and 19 disciplines. Provides methods and tools to give policy makers information in areas of uncertainty. Research is computationally and information intensive. A recurring challenge is pulling large amounts of data from disparate sources and of varying quality. One example: evaluating how crops might fail in reaction to extreme events requires both massive global data and highly specific local data, and the scales are often mismatched, e.g. between Iowa and Rwanda. Have used Computation Institute facilities to help with those issues. Need to merge and standardize data across multiple groups in other fields. Finding data and making it useful can dominate research projects; they want researchers to concentrate on analysis. Challenges: Technical – data access, processing, sharing, reproducibility; Cultural – multiple disciplines, what data sharing and access means, incentives for sharing might be mis-aligned.

Michael Wilde – Computation Institute

Fundamental importance of the model of computation in the overall process of research and science. If science is focused on delivering knowledge in papers, lots of computation is embedded in those papers – disciplinary code that represents a huge amount of intellectual capital, but done in a chaotic way: there is no standard for how the computation is expressed. With such a standard we could expand the availability of computation and also trace back what has been done. Started about ten years ago with the Grid Physics Network, applying these concepts to the LHC, the Sloan Sky Survey, and LIGO – "virtual data". If we shipped, along with the findings, a standard codified directory of how the data was derived, we could ship the computation anywhere on the planet, and once findings were obtained, pass the recipes along to colleagues. Making lots of headway, lots of projects using the tools. Swift – a high-level programming/workflow language for expressing how data is derived, which can also be expressed visually. Trying to apply the kind of thinking the Web brought to society to make science easier to navigate.
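
Swift is its own language, so as a rough illustration only, here is a small Python sketch of the "virtual data" idea – shipping a machine-readable recipe of how each output was derived alongside the output itself. The file names and recipe fields are invented for the example, not the actual Swift or virtual-data catalog format.

```python
import hashlib
import json
import pathlib
import subprocess

def run_and_record(cmd, inputs, output):
    """Run one derivation step and write a recipe describing it next to the output."""
    subprocess.run(cmd, check=True)
    sha = lambda p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
    recipe = {
        "command": cmd,                       # how the output was produced
        "inputs": {p: sha(p) for p in inputs},
        "output": output,
        "output_sha256": sha(output),
    }
    pathlib.Path(output + ".recipe.json").write_text(json.dumps(recipe, indent=2))
    return recipe

# Example (hypothetical script and files):
# run_and_record(["python", "fit_model.py", "raw.csv", "fit.json"],
#                inputs=["fit_model.py", "raw.csv"], output="fit.json")
```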

Kyle Chard – Computation Institute

Collaboration around data – the Globus project. Produces a research data management service that lets researchers manage big data – transfer, sharing, publishing. Goal is to make research as easy as running a business from a coffee shop. The base service is transfer of large data, which gets complex across multiple institutions – making sure the data is the same from one place to the other. Globus helps with that, and allows sharing to happen from existing large data stores. Need ways to describe, organize, and discover. Investigated metadata – the first effort is around publishing: wrap up data, put it in a place, describe it. Leverage resources within the institution and provide a layer on top with publication and workflow, and get a DOI. These services improve collaboration by allowing researchers to share data; publication helps not only with public discoverability but also with sharing within research groups.
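
As a sketch of the transfer piece, this is roughly what a managed, checksum-verified transfer looks like with the current Globus Python SDK (which postdates this talk); the endpoint IDs and path are placeholders and authentication is omitted.

```python
import globus_sdk

def transfer_dataset(transfer_client: globus_sdk.TransferClient,
                     src_endpoint: str, dst_endpoint: str, path: str) -> str:
    """Submit a recursive transfer between two endpoints and return the task id."""
    tdata = globus_sdk.TransferData(
        transfer_client, src_endpoint, dst_endpoint,
        label="dataset sync",
        sync_level="checksum",   # verify the data is the same on both ends
    )
    tdata.add_item(path, path, recursive=True)
    task = transfer_client.submit_transfer(tdata)
    return task["task_id"]

# transfer_client would come from the SDK's OAuth2 flow, e.g.
# globus_sdk.TransferClient(authorizer=...); see the Globus SDK docs.
```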

James Evans – Knowledge Lab

Sociologist, Computation Institute. Knowledge Lab started about a year ago, driven by a handful of questions: Where does knowledge come from? What drives attention and imagination? What role do social and institutional factors play in what research gets done? How is knowledge shared? The purpose is to marry these questions with the explosion of digital information and the opportunities it provides. Answering four questions: How do we efficiently harvest and share knowledge from all over? How do we learn how knowledge is made from these traces? How do we represent and recombine knowledge in novel ways? How do we improve ways of acquiring knowledge? Interested in the long view – what kinds of questions could be asked? Providing mid-scale funding for research projects. Questions they've been asking: How does science as an institution think, and how do scientists pick the next experiment? What's the balance of tradition and innovation in research? How do people understand safety in their environment, using street-view data? Taking data from millions of cancer papers and then driving experiments with a knowledge engine; studying peer review – how does the review process happen? Challenges: the corpus of science – working with publishers on how to represent themselves as a safe harbor that can provide value back; how to engage in rich data annotations at a level scientists can engage with; how to put this in a platform that fosters sustained engagement over time.

Alison Heath – Institute for Genomics and Systems Biology and Center for Data Intensive Science

Open Science Data Cloud – genomics, earth sciences, social sciences. How to leverage cloud infrastructure? How do you download and analyze petabyte-size datasets? Create something that looks kind of like Amazon or Google, but with instant access to large science datasets. What ideas do people come up with that involve multiple datasets? How do you analyze millions of genomes? How do you protect the security of that data and create a protected cloud environment for it? BioNimbus protected data cloud hosts the bulk of the Cancer Genome Project – expected to be about 2.5 petabytes of data. Looked at building infrastructure, now looking at how to grow it and give people access. In the past, communities themselves have handled data curation – how to make that easier? Tools for mapping data to identifiers and citing data. But data changes – how do you map that? How far back do you keep it? Tech vs. cultural problems – culturally it has been difficult. Some data access is controlled by NIH – it took months to get them to release attributes about who can access data; email doesn't scale for those kinds of purposes. Reproducibility – with virtual machines you can save a snapshot to pass on.
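
One common answer to "data changes – how do you map that?" is to mint identifiers from content hashes and keep every released version resolvable. A toy sketch of that idea (the registry here is just an in-memory dict, not any BioNimbus service):

```python
import hashlib
import time

registry = {}  # identifier -> metadata; a real service would persist and resolve these

def register_version(dataset_name: str, file_bytes: bytes) -> str:
    """Mint a stable, citable identifier for one released version of a dataset."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    identifier = f"{dataset_name}/sha256:{digest[:12]}"
    registry[identifier] = {
        "dataset": dataset_name,
        "sha256": digest,
        "registered": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    return identifier  # cite this, not a mutable "latest" name

# e.g. register_version("variant-calls", open("calls.vcf", "rb").read())
```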

Discussion

Engagement needs to go well beyond the technical. James Evans is engaging with humanities researchers – holding the equivalent of focus groups around questions over a sustained period: hammering on questions, working with data, reimagining projects. Fund people to do projects that link into data. Groups with multiple post-docs and data-savvy students can work once you give them access; artisanal researchers need more interaction and interface work. Simplify the pipeline of research outputs – reduce it to 4-5 plug-and-play bins with menus of analysis options.

Alison – it helps to be your own first user group. Some user communities are technical, some are not. Globus has a Web UI, CLI, APIs, etc. About 95% of the community use the web interface, which surprised them. Globus has a user experience team making it easier to use; it's easy to get tripped up on certificates, security, and authentication, which makes it difficult to create good interfaces. Electronic Medical Record companies have no interest in being able to share data across systems, which makes it very difficult. CRI – some people see them as a service provider, some as a research group. Success is measured differently for each, so they're trying to track both sets of metrics and figure out how to pull them out of day-to-day workstreams. Satisfaction of users will be seen in repeat business and comments to the dean's office, not through surveys. Providing boilerplate language on methods and results for grants and writing letters of support goes a long way towards making life easier for people; CRI provides results with methods and results sections ready to use in paper drafts. Should journals require an archived VM for download? Having recipes at the right level of abstraction, in addition to the data, is important. Data stored in repositories is typically not high quality – it lacks metadata and curation. You can rerun the exact experiment that was run, but not others. If toolkits automatically produce that recipe for storage and transmission, people will find it easy.

 

Information, Interaction, and Influence – Digital Science demos

Digital Science is a UK company that is sponsoring this workshop, and they’re starting off the morning by demoing their family of products.

Julia Hawks – VP North America, Symplectic

Founded in 2003 in London to serve the needs of researchers and research administrators. Joined Digital Science in 2010. Works with 50 universities – Duke, UC Boulder, Penn, Cornell, Cambridge, Oxford, Melbourne.

Elements – research information management solution. Capturing and collating quality data on faculty members to fulfill reporting needs: annual review, compliance with open access policies, showcasing institutional research through online profiles, tracking the impact of publications (capture citation and bibliometric scores). Trying to reduce burden on faculty members.

How is it done? – automated data capture, organize into profiles, data types configurable, reporting, external systems for visibility (good APIs).

Where does the data come from? External sources – Web of Science, Scopus, host of others, plus internal sources from courses, HR, grants.

Altmetric

Help researchers find the broader impact of their work by collecting information on articles online in one place. Company founded in 2011 in London. Works with publishers, supplying data to many journals, including Nature, Science, PNAS, and JAMA. Also working with librarians and repositories. Some disciplines have better coverage than others.

Altmetric for Institutions – allows users within an institution to get an overview of the attention research outputs are getting: blogs, mainstream media, smaller local papers, news sources for specific verticals, patents, and policy documents.

Product built with an API to feed research information systems, or have a tool called Explorer to browse bibliographies.
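
For a sense of how that API is typically used, here is a minimal sketch against the public Altmetric v1 endpoint; the DOI is an arbitrary example, and heavier or institutional use requires an API key from Altmetric.

```python
import requests

def altmetric_for_doi(doi: str):
    """Fetch Altmetric attention data for one article, or None if untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:
        return None
    resp.raise_for_status()
    return resp.json()

data = altmetric_for_doi("10.1038/nature12373")  # example DOI
if data:
    print(data["title"], data["score"])
```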

ReadCube

Built for researchers, but also has products for publishers and institutions. Manages publications and articles for reading – a library of PDFs with highlighting, annotations, and reference lookup. Recommends other articles based on the articles in your library.

ReadCube for Publishers – free indexing and discovery service, embedded PDF viewer + data, Checkout – individual article level ecommerce.

ReadCube Access for Institutions – enables institutions to close the collections gap with affordable supplemental access to content. Institutions can pick and choose by title and access type.

figshare – Dan Valin

Three offerings – researcher, publisher, institutions

Created by an academic, for academics. Goals: further the open science movement, build a collaborative portal, change existing workflows. Provides open APIs.

Cloud-based research data management system. Manage research outputs openly or privately with controlled collaborative spaces. Public repository for data.

For institutions – research outputs management and dissemination. Unlimited collaborative spaces that can be drilled down to on a departmental level.

Steve Leicht – UberResearch

Workflow processes – portfolio analysis and reporting, classification, etc. Example – modeling a classification semantically, and seeing differences across different funding agencies. Can compare different institutions, and can hook researchers to ORCID.

 

Information, Interaction, and Influence – Research networking and profile platforms

Research networking and profile platforms: design, technology and adoption of 
networking tools 

Tanu Malik, UChicago CI – treating science as an object. Need to record inputs and outputs, which is difficult, but some things are relatively easy to document: publications, patents, people, institutions, grants. Some of this has been taking place, documenting metadata associated with science. How can we integrate this data and establish relationships in order to get meaningful knowledge out of it? There have been a few success stories: VIVO, Harvard Profiles. This panel will discuss the data integration challenges and the deployment challenges. Computational methods exist but still need to be implemented in easy to use ways.

Simon Porter – University of Melbourne

Implemented VIVO as Find an Expert – oriented towards students and industry. Now gets around 19k unique visitors per week.

Serendipity as research activity – the maximum number of research opportunities are possible when we can maximize the number of people discovering or engaging with our research. Enabled by policy, enabled by search, enabled by standards, enabled by syndication. 

Australian universities have had to collect information on research activity all along. Some of it is private, but some is public and the University can assert publication of it. Most universities have something, but in lots of different systems.

Only a small number of people will use the search in your system. Most will come from Google. 

Syndicating information within the university – VIVO – gateway to information – departments take information from VIVO to publish their own web pages. Different brands for different departments. 

Syndication beyond the University – Want to plug into international research profiling efforts. 

New possibilities: Building capability maps. How to support research initiatives. Start from people being appointed to start the effort. Use Find An Expert to identify potential academics. Can put together multiple searches to outline capability sets. Graphing interactions of search results. 

Leslie Yuan – Clinical and Translational Science Institute – UCSF

The Profiles team all came from industry – highly oriented towards execution. When she started they wanted lots of people to use it, so how to get adoption? If you build it, they probably won't come. Use your data and analyses to drive success with a very lean budget. In four years they got to over 90k visits per month – about 20% of the traffic of the main UCSF web page.

Tactics:

1. Use Google (both inside and outside the institution). Used SEO on the site; 88% of researcher profiles have been viewed 10+ times. The goal was to get every researcher to come up in the top 3 results when someone types their name in. Partnered with University Relations – any article the press office writes about a researcher links to their profile.

2. Share the data. APIs provide data to 27 UCSF sites and apps. This has made life easier for IT people across the university, leading to evangelization in the departments. Personalized stats are sent to profile owners – how many times your profile was viewed within the institution, from other universities, from major pharmas. People wanted specifics. Nobody unsubscribed. Vanity trumps all. Research analytics are shared with leadership; this helped epidemiology and biostatistics show that they are the most collaborative unit on campus.

3. Keep looking at the data – monthly traffic reporting, engagement stats (by school, by department, who’s edited profile, who’s got pictures), Network visualizations of co-authorships.

4. Researcher engagement – automated onboarding emails: automatically creating profiles, then letting people know about them as they come on board. Added websites, videos, tweets and more inline. Batch loaded all UCTV videos onto people's profiles, then got UCTV to send email to researchers letting them know. Changed URLs – profiles.ucsf.edu/leslie.yuan

5. Partnerships – University Relations, Development & Alumni, Library, UC TV, Directory,  School of Medicine, Center for AIDS research, Dept. of Radiology. Was able to give data back to Univ Relations on articles by department or specialty, which they weren’t tracking. Automatic email that goes out if people get an article added. 

Took 8 or 9 months of concentrated conversations with chairs, deans, etc to convince them that this was a good thing. Only 7 people asked to be taken off the system. Uptake was slow, but now people are seeing the benefit of having their work out there.  6 people on her team have touched the system in some way, but it’s nobody’s full-time job.

Griffin Weber, Harvard – Research Networking at the School, University, and Global Scale

Added passive and active networking to the Profiles system. The passive network provided information people hadn't seen before, driving adoption; active networks allowed the site to grow over time. The passive network creates networks based on related concepts. Different ways of visualizing the concept maps – list, timeline, co-authors, geography (map), ego-centric radial graph (social network reach), list of similar people.
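
A minimal sketch of the "passive network" idea – a co-authorship graph derived automatically from publication records, with an ego-centric view like the radial graph mentioned above. The publication records are invented, and networkx is just one convenient graph library, not the Profiles implementation.

```python
import itertools
import networkx as nx

publications = [
    {"title": "Paper A", "authors": ["weber", "smith", "jones"]},
    {"title": "Paper B", "authors": ["weber", "jones"]},
]

g = nx.Graph()
for pub in publications:
    for a, b in itertools.combinations(pub["authors"], 2):
        prior = g.get_edge_data(a, b, {"weight": 0})["weight"]
        g.add_edge(a, b, weight=prior + 1)  # weight = number of co-authored papers

ego = nx.ego_graph(g, "weber")  # ego-centric view for one researcher
print(ego.edges(data=True))
```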

Different kinds of data for Harvard Faculty Finder – comets and stars discovered, cases presented to the Supreme Court, classes taught, etc. Pulled in 500k publications from Web of Science. Derived ontologies in 250 disciplines across those publications using statistical methods. 

Direct2experts.org – federated search across 70 biomed institutions. 

Faculty affairs uses Profiles to form promotions committees; students use it to find mentors.

Bart Trawick, NCBI – NLM – Easy come, easy go; SciENcv & my bibliography 

NIH gives about $15.5 billion in grants per year. Until 2007 they didn't have a way of seeing what they were getting from the investment. Public access to publications was mandated by Congress in 2007; started using My Bibliography to track. Over 61k grant applications come in every year, just as flat PDFs.

About 125k US-trained scientists are in the workforce now. Many have been funded by training grants. Want to see how those scientists continue their careers. Over 2,500 unemployed PhDs in biomedical science.

My NCBI overview – tools and preferences integrated with NCBI databases, connected to PubMed, genomics resources, etc. Uses federated login (e.g., can link Google accounts). Can link an eRA Commons account to pull in information about profiles and linked grants.

My Bibliography – make it a tool to capture information and link grant data to publications. Set up to monitor many of the databases that information flows through. The end result of the public access policy is that all NIH-funded research publications get deposited in PubMed Central. My Bibliography lets scientists know if they're compliant with the policy. Sends structured data back out to PubMed, allowing searching by grant numbers, etc.
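
A quick sketch of what those structured grant links enable downstream: searching PubMed for publications tied to a grant number through the NCBI E-utilities API. The grant number here is made up.

```python
import requests

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={
        "db": "pubmed",
        "term": "R01 CA000000[Grant Number]",  # hypothetical grant number
        "retmode": "json",
    },
)
resp.raise_for_status()
ids = resp.json()["esearchresult"]["idlist"]
print(f"{len(ids)} PubMed records acknowledge this grant")
```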

SciENcv – released its second version this week. Helps scientists fill out a profile – each agency has its own biosketch format, and SciENcv is an attempt to standardize that. NIH is set up, working on others, NSF next on the list. Wanted to make it easy for researchers who are already funded and using My Bibliography. The data exists out there – would like to get to a point of reusing data for grant reporting. Added inputs: ORCID, eRA Commons (used to manage grants), My Bibliography. Grants.gov requires biosketches in PDF; can export from SciENcv as PDF to grants.gov, with rich metadata attached.

Information, Interaction, and Influence 

 I’m attending a workshop on Research Information Technologies and their Role in Advancing Science.

Ian Foster from the UChicago Computation Institute is kicking it off. 

We now have the ability to collect data to study how science is conducted. We can also use that data for other purposes: finding collaborators, easing administrative burdens, etc. Those are the reasons we often get funding to build research information systems, but can use those systems to do more interesting things.

Interested in two aspects of this:

1. Treat science itself as an object of study.
2. Can use this information to improve the way we do science. Don Swanson – research program to discover undiscovered public knowledge. 

The challenge we face as a research community as we create research information systems is to bring together large amounts of information from many different places to create systems that are sustainable, scalable, and usable. Universities can’t build them by themselves, and neither can private companies. 

Victoria Stodden – Columbia University (Statistics) – How Science is Different: Digitizing for Discovery

Slides online at: http://www.stanford.edu/~vcs/talks/UCCI-May182014-STODDEN.pdf

Tipping point as science becomes digital – how do we confront issues of transparency and public participation? New area for scientists. Other areas have dealt with this, but what is different about science?

1. Collaboration, reproducibility, and reliability: scientific cornerstones and cyberinfrastructure

Scoping the issue – looking at the June issues of the Journal of the American Statistical Association: how computational is it? Is the code available? In 1996 about half the papers were computational; by 2009 almost all were. In '96 none talked about getting the code. In 2011, 21% did – still 4 out of 5 are black boxes.

A 2011 study looked at 500 papers in the biological sciences and was able to get the data in 9% of the cases.

The scientific method:

Deductive: math, formal logic. Empirical (or inductive): largely centered around statistical analysis of controlled experiments. Computational simulation and data-driven science might be 3rd and 4th branches – the Fourth Paradigm.

Credibility Crisis: Lots of discussion in journals and pop press about dissemination and reliability of scientific record. 

Ubiquity of error: the central motivation of the scientific method – we realize that our reasoning may be flawed, so we test it against evidence to get closer to the truth. In the deductive branch we have proofs; in the empirical branch we have the machinery of hypothesis testing. It took hundreds of years to come up with standards of reliability and reproducibility. The computational aspect is only a potential new branch until we develop comparable standards. Jon Claerbout (Stanford), "Really Reproducible Research": an article about computational science is merely the advertisement of the scholarship. The actual scholarship is the set of code and data that generate the article.
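
In that spirit, a toy example of what "the scholarship is the code and data" looks like in practice: a script, archived with the data, that regenerates a figure exactly as it appears in the article. File names and the analysis are placeholders.

```python
import csv
import statistics
import matplotlib.pyplot as plt

with open("data/measurements.csv") as f:          # archived raw data
    values = [float(row["value"]) for row in csv.DictReader(f)]

print("n =", len(values), "mean =", round(statistics.mean(values), 3))

plt.hist(values, bins=20)
plt.xlabel("measurement")
plt.savefig("figures/figure1.png")                # the exact figure in the paper
```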

Supporting computational science: dissemination platforms; workflow tracking and research environments (prepublication); embedded publishing – documents with ways of interacting with code and data. Mostly being put together by academics without reward, because they think these are important problems to solve.

Infrastructure design is important and rapidly moving forward.

Research Compendia – a site with dedicated pages which house code and data, so you can download digital scholarly objects. 

2. Driving forces for Infrastructure Development

ICERM Workshop Dec 2012 – reproducibility in computational mathematics. Workshop report that was collaboratively written by attendees. Tries to lay out guidelines for releasing code and data when publishing results. Details about what needs to be described in the publication. 

3. Re-use and re-purposing: Crowd sourcing and evidence-based ***

Reproducible Research is Grassroots.

External drivers: open science from the White House. The OSTP executive memorandum directs federal funding agencies to submit plans within 6 months saying how they will facilitate access to publications and data; in May an order to federal agencies doing research directed them to make data publicly available. Software is conspicuously absent. Software has a different legal status than data, which makes it different for federal mandates – the Bayh-Dole Act allows universities to claim patents on software.

Science policy in congress – how do we fund science and what are the deliverables? Much action around publications.

National Science Board 2011 report on Digital Research Data Sharing and Management

Federal funding agencies have had a long-standing commitment to sharing data and (to a degree) software. NSF grant guidelines expect investigators to share data with other researchers at no more than incremental cost, and encourage investigators to share software – largely unenforced. How do you hold people's feet to the fire when definitions are still very slippery? NIH expects and supports timely release of research data for bigger grants (over $500k). The NSF data management plan looks like it's trying to put meat on the bones of the grant guidelines.

2003 National Academies report on Data Sharing in the Life Sciences.

Institute of Medicine report on Evolution of Translational Omics: Lessons Learned and the Path Forward. When people tried to replicate the work they couldn't, and found many mistakes. How did the work get approved for a clinical trial? New standards were recommended, including standards for locking down software.

Openness in science: thinking about infrastructure to support this is not yet part of our practice as computational scientists – having some sense of permanence of links to data and code, and standards for sharing data and code so they're usable by others. Just starting to develop.

Traditional Sources of Error: View of the computer as another possible source of error in the discovery process. 

Reproducibility at scale – results may take specialized hardware and long run times. How do we reproduce that? What do we do with enormous output data?

Stability and robustness of results: Are the results stable? If I'm using statistical methods, do they add their own variability to the findings?

The Larger Community – Greater transparency opens scholarship to a much larger group – crowd sourcing and public engagement in science. Currently most engagement is in collecting data, much less in the analysis and discovery. How do we provide context for use of data? How do we ingest and evaluate findings coming from new sources? New legal issues – copyright, patenting, etc. Greater access has possibility of increasing trust and help inform the debates around evidence-based policy making. 

Legal barriers to making code and data available: you immediately run into copyright. In the US there is no copyright on raw facts, per the Supreme Court; an original selection and arrangement of facts is copyrightable. Datasets munged creatively could be copyrightable, but it's murky. Easiest to put data in the public domain or use a CC license. Different in Europe. The GPL includes "sharealike" terms preventing use of open source code in proprietary ways. Science norms are slightly different – share work openly to fuel further development wherever it happens.

CSG Spring 2014 – Analytics Discussion

ECAR Analytics Maturity Index – could use it to assess which group to partner with to judge feasibility. 

NYU started analytics several years ago and chose certain kinds of data. 

Dave Vernon – Cornell
Hopes and dreams for the Cornell Office of Data Architecture and Analytics (ODAA)
Current state of data usability at Cornell: like a library system with hundreds of libraries, each with unique catalog systems (if any), each requiring esoteric knowledge, each dependent on specialists who don't talk to each other.

Traditional "BI" – not analytics but report generation. Aging data governance.

ODAA – to support Cornell’s mission by maximizing the value of data resources. Act as a catalyst/focal point to enable access to teaching, research, and admin data. Acknowledge limited resource, but will attempt to maximize value of existing resources.

Rethink governance: success as the norm, restrictions only as needed? Broad campus involvement in data management – “freeing” of structured / unstructured data. Stop arguing over tools: OBIEE vs Tableau, etc. Form user groups – get the analysts talking. 

Service Strategy: Expand Institutional Intelligence initiative: create focused value from a select corpus of admin data (metadata, data provenance, governance, and sustainable funding). Cost recovered reporting and analytics services. User groups, consultants, catalog and promulgate admin and research data resources. 

Resource strategy: What do you put together in this office? Oracle people, reporting people. Re-allocate savings. Add skilled data and analytics professionals. Modest investment in legacy tool refresh. People are getting stuck in those discussions of tools.

Measures of success: ODAA becomes a known and trusted resource. Cultural evolution – open, not insular. Data becomes actionable and self-service. Broad campus involvement in data management, "freeing" of data – have to work on data stewards to convince them that they must make a compelling argument to keep data private. Continued success of legacy services.

At NYU IR owns the data stewardship and governance, but there is a group in a functional unit (not IT) that acts as the front door for data access. Currently just admin data focus, but growing out of that. Two recent challenges – student access to data (pressing governance structure), and learning analytics (people want access to LMS click streams – what about privacy concerns?).

Stanford – an IR group of about 15 people reports to the provost and handles admin data. A group reporting to the dean of research handles research data. Teaching & learning is under another VP. The groups specialize, reducing conflict. Data scientists are part of those groups.

Washington is spinning up a data science studio with research people, IT, and library people as a physical space for people to co-locate.

Jim Phelps – can we use the opportunity of replacing ERPs to have the larger discussion about data access and analytics?

Notre Dame halted its BI effort to go deeply into a data governance process, and as part of that is getting a handle on all of the sets of data they have. Building a data portal that catalogs reports – more a collection of data definitions than a catalog of data: data.nd.edu. A concept at this point, but moving in that direction. Registry of data – all data must be addressable by URL. The catalog shows existing reports and what roles are necessary to get access. Terms used in the data are defined.

Duke – Not hearing demand for this on campus, but getting good information on IT activity using Splunk on data streams. Could get traction by showing competencies in analysis.

At NYU had a new VP for Enrollment Management who had lots of data analysis expertise, who wowed the Board with sophisticated analyses, driving demand for that in other applications. 

Data Science Venn diagram – http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram

Dave Vernon – There’s an opportunity to ride the big data wave to add value by bringing people together and getting the conversations going and make people feel included. 

How are these teams staffed? Can't expect to hire someone who knows all the pieces, so you have to have cross-functional teams to bring skills together. Michigan State has a Master's program in analytics, so is bringing some good people in from there. The last four hires at Notre Dame have been right off the campus; they now have 8 FTE in the BI space.

CSG Spring 2014 – Organizational and staff development, cont.

Bill Clebsch – Stanford – Focus on talent

Recruit, review, reward, renew

The only people on your campus who understand recruiting are athletics.

You don’t get good people by putting out job postings. You get talent from networks. Listen to headhunters.

Development – we can develop people. ITLP is important for creating a language and lexicon for how we do things. Developed STLP for individual contributors at Stanford. Includes central and distributed IT. 

Steve Fleagle – Iowa

IT staff don't get formal training in soft skills, change management, or project management. Participated in the ITLP program and saw very good results, but can only send 5-6 people per year. Brought MOR on campus and put lots of people through it, which has been very successful and has led to IT people being asked to lead in non-IT situations.

Emily Deere – UCSD

Less permanent funding and more temporary funding leads to more contractor hiring and less permanent staff. Not accepting any more temporary money for contracts; now the campus is supporting more permanent funding for IT. Pushing contractors to not have buyouts so they can bring the contractors they want in on a permanent basis. At Stanford desktop support has been growing 20% per year, but they only hire people as contractors to start; the cut rate is about 50% before hiring permanently.

Marin Stanek, Colorado, Boulder

Brought in a faculty member who is a proponent of Hoshin Planning (single point of compass). Started an OIT empowerment survey in summer of 2012, assessing 5 elements – do I have the authority, accountability, responsibility, knowledge, and tools to do my job? Students take it too. Confidential but not anonymous. The goal is a 100% response rate and full empowerment. Had to set up a separate Qualtrics instance outside of campus and a hotline outside of IT to overcome staff distrust.

Each manager only gets access to the reports from their employees. 

CSG Spring 2014 – Organizational and Staff Development for an Unpredictable Future

Bernie Gulacheck – Minnesota

Approach:

  • Adjusting our Organizational Structures
  • Acquiring and Developing Talent
  • Understanding our Customers and our Staff 

Survey results 

Most people survey staff and customers annually, mostly through online surveys. 

15 out of 17 CIOs have been involved in an org analysis or redesign within the last year. Lot of people trying to flatten organization and/or redefining managerial roles. Some upswing in Matrix management and participatory management.

Lots of people are doing ITIL implementations. 

7 schools redesigning IT job classifications. 

Nobody said they’re decentralizing IT, but are trying more centralization and refocusing local IT on innovation, research, or curriculum. 

What’s driving change? Coalescence of technology-enhanced teaching and learning under the IT organization. Reducing layers of management. Efficiency, Rationalization, cost reduction. Or because of a new CIO.

3/4 of schools have a technical career path. Some track staff development in manager evaluations. 

Skills that have risen in importance in the last three years: Communication, cloud computing, IT architecture, project management, vendor management

What will rise in the next three years? Business analysis, cloud computing, consulting, data analysis, soft skills.

One person notes that some new hires lack curiosity about how the institution functions. 

Hard to hire positions: dbas, erp devs, info sec, java devs, network engineers, sys admins.

Post incident surveys for support cases are very popular with IT – are they popular with clients?

One campus comments about survey proliferation from different parts of IT. Some schools make every survey go through central communication for consistency. Some use institutional research office.

Stanford tries to make their survey so it can be finished in five minutes and they only include items that can be actionable. 

Forming IT Services in 3 Acts – Michigan State: Breandan Guenther & Tom Davis

Act 1: Culture derives from two large sections of IT – academic and admin. Lots of them/us language – it didn't feel like an organizational collaboration. Too many gaps. This spilled over to campus – not a lot of trust or desire to work together with central IT.

Act 2: Reorganizing – oriented IT towards customer constituencies. Service level at the top, with infrastructure at bottom and service units aligned vertically. Asked staff whether it should be a tune-up or an extreme makeover – staff wanted makeover. 

Moved to service centers & directors, one central HR team, centralized accounting, IT support, etc. 

Act 3: Recovering from resistance & rejection

Internal – resistance (Culture eats strategy for breakfast; Grieving; Insurgency; Orphans; Overload). 

Lessons from the aftermath – ambivalence about central budgeting in the CIO's office; financial management lag; the accounting string ought to emphasize the service portfolio equally with the org structure.

Matrix Organizational Model at Minnesota – Bernie Gulacheck

In spring of 2012 embarked on a reorg of the central IT shop, moving towards a more ITIL-based framework. Explicitly calling out the demand side of IT – listening to the external community, governance – as separate from the supply-side provision of services.

Separated resource management from initiative and operational management. 

The complex part is understanding the difference between service and function. Service = what we do: cross-functional teams deliver customer-facing services. Function = how we do it. Defined 23 business services that serve the horizontals, each of which has technical offerings (150-180 in total). Security and Enterprise Architecture are off to the side, where the resource manager is also the service director. Line orgs: end user support, academic tech, infrastructure & production, application dev.

Challenge is to change staff perception that they look up for direction, but instead to look to the side. Resource manager is head coach. Service manager is quarterback – calls the plays.

Challenges & Benefits: Initially understanding the model; spans and layers (went from 72 managers to 20, some ended up as service directors); resource managers & evaluations; healthy tension (service directors are responsible for meeting budgets); duplication elimination; scalable. 

Formal communities of practice across the entire IT community – formal charge, beginning, and end. 

IT@Cornell – Job Family – Ted Dodds

IT is the 2nd largest job family – 755 people (Ithaca), with a 45/55 ratio of center to units. CIT staffing has been reduced by 28% since 2009. The job family was reviewed and rationalized in 2011.

Today IT expenditures are 90% on utilities and 10% on differentiators. They aspire to a more equal balance.

Skill inventory/assessment – by popular demand; self-assessment of technical and business skills, 80+% response, giving a current-state picture. The survey was not anonymous because they want to give the information to IT directors on campus. Now have a current-state assessment and are developing direction on future skills needs.

Next – Continue ITLP and ELP (Emerging Leaders Program); Drive other training programs by current/future gap; Actively manage attrition for whole IT job family.

 

