Here’s the slide deck from the Tableau Admin group meeting. Please let me know if there are questions. A few things to note: 1.) There is a complete ‘User’ module that does the following: add/remove…
If you’re leveraging local authentication for Tableau Server, you might be wondering whether there’s an automated, ad hoc way to remove or update users on your server and its Sites (where appropriate). Sure, you can do it manually. But who wants to do that? No one. If you have a lot of activity, these one-offs are bound to add up and eat away at both time and security.
Here’s a quick PowerShell function that will do that for you. It’s designed to update users and assumes you already have an automated way of provisioning your new users and removing your old ones (please say you do). Again, by ‘update’, I mean removing a user’s ability to go back to the site and click ‘Forgot your password’ after they’ve been terminated. That would be bad, especially if they still have a valid email…
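The original is a PowerShell function, but the idea translates to any language. Here’s a rough Python sketch of the core of it: scramble a terminated user’s local-auth password via Tableau Server’s REST API (Update User). The endpoint and payload shapes follow the documented REST API, but the API version, server URL, site id, and user id in the usage comment are placeholders you’d fill in yourself.

```python
# Sketch: lock out a terminated local-auth user by setting their password
# to a random value they will never know. The REST call itself is shown
# only in comments; the helpers below are what you'd test locally.
import secrets
import string
from xml.sax.saxutils import quoteattr

def random_password(length=32):
    """Generate a throwaway password the user will never see."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def update_user_body(password):
    """Request body for PUT /api/<ver>/sites/<site-id>/users/<user-id>."""
    return "<tsRequest><user password=%s /></tsRequest>" % quoteattr(password)

# Hypothetical usage (token comes from POST /api/<ver>/auth/signin):
# requests.put(f"{server}/api/3.4/sites/{site_id}/users/{user_id}",
#              data=update_user_body(random_password()),
#              headers={"X-Tableau-Auth": token})
```

Pair this with your provisioning feed and ‘Forgot your password’ becomes a dead end for anyone who’s been offboarded.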
Get the code here.
Let me know if there are questions!
We’re excited to bring Tableau to Farmington!
Our first meeting is Thursday, February 23 @ 3:30PM MST. Please attend if you’re able. We’re planning on gearing these meetings around the applied, real-world use of Tableau so they should be a blast.
Here are the details:
See you there!
Thanks to all those who attended, and for those who had questions, please let me know if there’s anything I can help with.
Forget about perfection
Your data (information) is a set of free-flowing, dynamic signals about how your business (or whatever else) is understood. It will never be perfect. In fact, you don’t want it to be: if it were perfect (which, again, is impossible), there would be no room for improvement or self-reflection. What’s more, it would lack creative impulse: you don’t think freely about something that’s ‘perfect’ and done.
Data is fluid
The goal should be to bring the certification/governance process *to* the data. If you must wait, collect, meet, agree, and on and on, a critical piece is missing: data should be certified by what it produces (or its many derivations). How does this work? How can you certify a CSV file? Simple: Alert, Integrate, and Monitor your Analytics Infrastructure.
Essentially, if nothing is created from the data, why would there be a need to certify it? Once something is created, then you relentlessly certify, in flight, what is being produced. It’s a sort of fact-checking, data-alerting mechanism that’s completely possible with the right framework. Which leads to the next point…
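To make “certify in flight” concrete, here’s a minimal Python sketch: instead of blessing a CSV up front, check each row at the moment something is actually produced from it, and fire an alert as soon as an expectation is violated. The rules themselves (non-empty id, non-negative amount) are made-up examples; the alerting hook would be whatever tool you already use.

```python
# Certify-in-flight sketch: validate rows as they are consumed, not before.
import csv
import io

# Illustrative expectations; real rules come from your governance process.
RULES = {
    "id": lambda v: v.strip() != "",
    "amount": lambda v: float(v) >= 0,
}

def certify_rows(csv_text):
    """Yield good rows; collect alerts for rows that break a rule."""
    alerts, good = [], []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=1):
        failed = [col for col, ok in RULES.items() if not ok(row[col])]
        if failed:
            alerts.append((i, failed))   # hook your alerting tool in here
        else:
            good.append(row)
    return good, alerts

good, alerts = certify_rows("id,amount\na,10\n,5\nb,-1\n")
# good keeps row 1; alerts flag row 2 (blank id) and row 3 (negative amount)
```

Nothing gets a rubber stamp; the data earns its certification every time it’s used.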
Collect data about patterns of usage
If you’re not analyzing usage patterns, you’re missing valuable data. With all analytics, there are reasons behind (1) why a specific set of data is selected and (2) what the user is attempting to do with it. You can easily keep this metadata in AWS S3 (with a good lifecycle policy) or store it somewhere else for potential later use. The point is that if you aren’t understanding the *why*, you’re only seeing one side of the coin.
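As a sketch of what that metadata could look like: wrap each selection in a small usage event and write it under a date-partitioned S3 key so a lifecycle policy can age it out. The field names and bucket layout here are illustrative, not a standard.

```python
# Capture the *why* alongside the *what*: one small JSON event per access.
import json
from datetime import datetime, timezone

def usage_event(user, dataset, intent):
    """Record (1) what data was selected and (2) what the user was doing."""
    return {
        "user": user,
        "dataset": dataset,
        "intent": intent,          # e.g. "trend analysis", "export"
        "at": datetime.now(timezone.utc).isoformat(),
    }

def s3_key(event):
    """Date-partitioned key, friendly to lifecycle rules and later analysis."""
    day = event["at"][:10]
    return f"usage/{day}/{event['user']}.json"

evt = usage_event("alice", "sales_extract", "trend analysis")
# boto3.client("s3").put_object(Bucket=..., Key=s3_key(evt),
#                               Body=json.dumps(evt))
```

With the date in the key prefix, an S3 lifecycle rule can quietly expire or archive old events without any extra plumbing.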
Keep everything, then figure out what to do with it, and then how to certify it.
Leverage the cloud
Don’t be constrained or afraid to combine pieces of cloud technologies to serve the Analytics structure.
Become durable / resilient
Even though cloud providers advertise very high monthly uptime percentages, be prepared for something to break anyway. If you do that, you’ll have even *more* creative freedom (crazy, huh?).
Choose to: (1) scale laterally or (2) scale vertically
This is all about re-framing the question around Projects vs Sites.
Why have Sites over Projects, or Projects over Sites? And that can’t be the only choice, right? (Hint: it’s not.)
I’ve seen benefits to both, but the extra work involved with Sites makes scaling laterally (Sites) much more difficult than scaling vertically (Projects), not to mention the challenges of stepping into the compliance realm.
Remove all the pieces from your base install that can be handled elsewhere (e.g., collect the ‘garbage’ but store it on AWS S3 with a good lifecycle policy). That way, your Analytics infra stays light and fast.
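For reference, a “good lifecycle policy” can be as small as this: one S3 lifecycle configuration that moves the collected ‘garbage’ to cheaper storage after a month and expires it after a year. The bucket prefix, rule name, and day counts below are illustrative; tune them to your retention needs.

```json
{
  "Rules": [
    {
      "ID": "expire-analytics-garbage",
      "Filter": { "Prefix": "tableau/logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

Apply it once (for example with `aws s3api put-bucket-lifecycle-configuration`) and the cleanup runs itself from then on.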
I challenge you to think *bigger* with Tableau. How can you provide more fluid access to insight than anything else?
I’m calling it: 2017 will not confuse Analytics with Reporting
We’ve got too much technology, tooling, and too many components to keep mixing the two realms (hint: they’ve never been related… the ‘self-service’ myth hasn’t really separated them quite yet).
Analytics has depth and is fluid. Reporting is rigid and superficial.
Here is a small example of what I mean. Your #Fitbit is more than a report. Think about that and shake your, er, data-maker 🙂
Look for more on this and other tech bits this year.
Happy New Year!
Ever created a wonderful Tableau dashboard with the added ‘Export to CSV’ functionality? We all have. Click the super-sleek Excel icon and, voilà, the download begins. Send the file, walk away, and think: ‘my, was that cool.’
But wait. You get an email complaining about column order. For some reason, the columns you’ve added, perfectly, are all messed up. In fact, some would say they’re in alphabetical order. What the?!
Anyway, here’s an easy PowerShell function that will fix that and send the email with the columns in the correct order.
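The fix itself is language-agnostic. Here’s the same idea as a Python sketch: rewrite the exported CSV (whose columns come out alphabetized) back into the order you actually designed. Column names here are made up for the example; sending the email afterwards is left to your mailer of choice (`Send-MailMessage`, `smtplib`, etc.).

```python
# Rewrite a CSV's columns into an explicit, designed order.
import csv
import io

def reorder_csv(csv_text, column_order):
    """Return csv_text with its columns rewritten in column_order."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=column_order, lineterminator="\n")
    writer.writeheader()
    for row in reader:
        writer.writerow({col: row[col] for col in column_order})
    return out.getvalue()

fixed = reorder_csv("Amount,Region,Year\n10,West,2016\n",
                    ["Year", "Region", "Amount"])
# fixed == "Year,Region,Amount\n2016,West,10\n"
```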
There are plenty of ways to make a good backup of your analytics content, and the options available on Tableau Server are numerous. But here’s the better question: are they efficient and redundant enough?
Yes, current Server versions allow for n days of content backups, but this slowly increases the size of your backup and storage (and puts too many eggs in one basket). Plus, there’s no effective way to turn this option off if you have multiple sites (at least that I’m aware of). What’s more, you can get by with a great daily backup strategy and a subsequent (automatic) restore to your development machine.
To add to the complexity, what if you don’t want to use the CLIs (tabcmd) to download massive data sources (>1GB)? What about users who, through no fault of their own, just click ‘download’ from the GUI? Do you, as Server admins, know the impact this has on the Server? Hint: you should and it’s bad.
Have users drop the name of their desired content in a shared file (or dedicated Slack channel) and then have daily backups done without using tabcmd or selecting ‘download’ from the GUI. Bonus: ship to AWS S3 and recover that space on your machine! Bigger bonus: logging.
Here’s what you’re going to do at a very high level:
- Write super SQL that can dynamically get all the info you need (twb/twbx/tds/tdsx)
- Use psql and the lo_export function to get this ^
- This ^ won’t get you the TDE (if there is one) so you need to find it on the filesystem
- Use the ‘extract’ table and get the UUID for where this ^ is stored in the filesystem
- Parse the XML to update the location of the TDE (Soapbox: for those of you who think it’s ‘hacking’ XML, please make sure you RTFM).
- Zip it up and send to AWS S3 and get it off your Server machine
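The XML and packaging steps (the last two bullets) can be sketched like this. Python here rather than the original tooling, and two loud assumptions: that the extract connection in the .tds shows up as a `connection` element with `class="dataengine"` and a `dbname` attribute (the .tds format isn’t a documented contract), and that the file names are illustrative.

```python
# Sketch: point the .tds XML at the extract pulled off the filesystem,
# then zip both into a .tdsx ready to ship to S3.
import xml.etree.ElementTree as ET
import zipfile

def repoint_extract(tds_path, new_tde_path):
    """Rewrite the extract connection's dbname to new_tde_path."""
    tree = ET.parse(tds_path)
    for conn in tree.iter("connection"):
        if conn.get("class") == "dataengine":   # assumed .tds shape
            conn.set("dbname", new_tde_path)
    tree.write(tds_path, xml_declaration=True, encoding="utf-8")

def package_tdsx(tds_path, tde_path, tdsx_path):
    """Zip the .tds and its extract into a .tdsx (then ship it to S3)."""
    with zipfile.ZipFile(tdsx_path, "w", zipfile.ZIP_DEFLATED) as z:
        z.write(tds_path, arcname="sales.tds")            # names illustrative
        z.write(tde_path, arcname="Data/Extracts/sales.tde")
```

From there, a `boto3` upload (or the AWS CLI) gets the .tdsx off the Server machine and into versioned, lifecycle-managed storage.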
Is that a lot of steps? Maybe. But this whole process is automated. Do a little work up front and you save yourself a lot of time down the line, not to mention a lot of space on your machine. Plus, your Server infra keeps humming along without the added load of multiple versions of your content. You also don’t need to worry about (1) versioning and (2) installing tabcmd.
Here’s a sample of SQL you should write to scale to whatever content you’d need to backup and version.
select ds.id as "id"
     , ds.luid as "luid"
     , ds.site_id as "site_id"
     , s.name as "site_name"
     , s.luid as "site_luid"
     , case when ds.data_engine_extracts = TRUE
            then lower(ds.repository_url) || '.tdsx'
            else lower(ds.repository_url) || '.tds'
       end as "export_name"
     , ds.data_engine_extracts as "hasExtract"
     /*, ds.repository_data_id
       , ds.repository_extract_data_id */
     , ed.descriptor as "tde_path"
     , rd.content as "OID"
from datasources ds
left join sites s on s.id = ds.site_id
left join extracts ed on ed.datasource_id = ds.id
left join repository_data rd
       on (ds.repository_data_id = rd.id or ds.repository_extract_data_id = rd.id)
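The "OID" column from that query feeds the psql step: `\lo_export` is a client-side psql meta-command that writes a large object to a local file. Here’s a small Python sketch (the original tooling was PowerShell) that turns the query’s (OID, export_name) rows into a psql script; the output directory and file names are illustrative.

```python
# Turn (OID, export_name) rows from the datasource query into a psql
# script of client-side \lo_export meta-commands. Run the result with
# something like:
#   psql -h localhost -p 8060 -U readonly workgroup -f export.psql
def lo_export_script(rows, out_dir="backup"):
    """rows: iterable of (oid, export_name) pairs from the query above."""
    lines = [r"\lo_export %d '%s/%s'" % (oid, out_dir, name)
             for oid, name in rows]
    return "\n".join(lines) + "\n"

script = lo_export_script([(16405, "sales.tdsx"), (16407, "hr.tds")])
# script:
# \lo_export 16405 'backup/sales.tdsx'
# \lo_export 16407 'backup/hr.tds'
```

Because `\lo_export` runs on the client side, the read-only repository user is enough; no superuser access to the Server’s Postgres is needed.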
#Data16 may be over and the Server Admin session may have ended but don’t let the fun stop there. Continuing with the recommendation and urgency of making sure you monitor your Tableau/Analytics infrastructure, Logentries and I have teamed up on a Whitepaper regarding all things Alerting, Integrating and Monitoring.
You’ll find a very thorough analysis of *why* it’s important to have a strategy in place, as well as tips, tricks, and recommendations for further reading. What’s more, you’ll find out how easy it is to get a variety of log data back into Tableau for deeper analysis.
So, get the Whitepaper and spend Thanksgiving implementing it. Just kidding. Take a break for Thanksgiving and then do this 🙂
Thanks to all who attended the Server Admin Meetup. For those who could not attend, I’ve attached the slides from the meeting.
If there are any questions, please don’t hesitate to let me know.