I'm sorry to hear about the bad experience you're having. I can assure you that we're neither ignoring any requests nor deleting them. Can you please send me an email at gmantri at cerebrata.com describing the problems you're facing, and I will personally take a look at those issues.
This functionality mostly exists in the tool already. If you right-click on the storage account and then click on the "Jump To" context menu item, you will be able to search for tables, queues, and blob containers by name.
Do give this a try and let us know your feedback.
If by cache-control metadata you mean the cache-control property of blobs in a container, you can already do this today. There's no need to write custom code for it.
If you wish to change the cache-control property of all blobs in a container, simply right-click on the container and then click on the "Set Properties of Blobs..." context menu item. In the popup window that appears, set the desired value for cache-control. Once you click "OK", the application will set the cache-control property of all blobs in that container to the value you specified.
If you wish to change the cache-control property of only selected blobs in a container, you can do that as well. Simply select the blobs (and folders) for which you wish to set the value, right-click, and then click on the "Set Properties of Blobs..." context menu item. After that, follow the same procedure as above.
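Conceptually, what "Set Properties of Blobs..." does is apply one cache-control value to every blob (or a selected subset) in a container. The sketch below is purely illustrative, not the tool's actual code: blob properties are modeled as plain dicts, whereas the real tool sets them through the Storage REST API.

```python
# Illustrative sketch only: apply one Cache-Control value to all blobs in
# a container, or just to a selected subset. Blob properties are modeled
# as plain dicts; the real tool calls the Storage REST API per blob.

def set_cache_control(blobs, value, selected=None):
    """Set the cache-control property on blobs.

    blobs:    dict mapping blob name -> properties dict
    value:    Cache-Control header value, e.g. "public, max-age=3600"
    selected: optional iterable of blob names; None means all blobs
    """
    names = list(blobs) if selected is None else selected
    for name in names:
        blobs[name]["cache-control"] = value
    return blobs

container = {
    "images/logo.png": {"content-type": "image/png"},
    "images/banner.jpg": {"content-type": "image/jpeg"},
}
set_cache_control(container, "public, max-age=86400")
```

The `selected` parameter mirrors the second workflow above: passing an explicit list of blob names touches only those blobs and leaves the rest of the container untouched.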
Please let us know if this is something you're looking for.
This would require us to support "Azure Resource Manager (ARM)" functionality. I am pleased to say that we have started working on including this functionality. What we're working on right now is support for managing ARM storage accounts. This will pave the way for more features built on top of ARM. However, it is going to take some time.
May I suggest you take a look at this in our other product, Cloud Portam (http://www.cloudportam.com). We have included support for managing Resource Groups there (and more besides). To see all the features we support, please visit our website at http://www.cloudportam.com/features/azure-subscription/resource-groups.
If there are any questions, please let me know.
Thank you for making this suggestion. As you may know, it is not straightforward to find a storage account's size using the Storage REST API. However, it is quite possible to do so using the Azure Billing and Usage API. In fact, we have done this in our other product, Cloud Portam (http://www.cloudportam.com). While we figure out how to do this in Azure Management Studio, I would strongly encourage you to try out this functionality there. You can learn more about this feature on Cloud Portam's blog: http://blog.cloudportam.com/cloud-portam-updates-view-azure-storage-accounts-size-change-storage-account-type-manage-storage-account-tags-custom-domain-and-cloud-service-diagnostics-enhancements/.
If there are any questions, please let me know.
The current issue is that there is no publicly exposed billing API for Azure, except for customers with an Enterprise Agreement (EA). We will review this further once a universally available billing API exists.
We have this feature built into our other product, Cloud Portam (http://cloudportam.com/features/azure-subscription/usage-and-billing). It makes use of the Azure Usage & Billing API. I would strongly encourage you to try it out there.
Following up with submitter. We do support the && operator.
Over the weekend we released a new version of Azure Management Studio that lets you connect to your Azure subscription using your Azure AD credentials. Currently you can only manage ARM storage accounts with this functionality, but more will be added soon. Please update your copy of Azure Management Studio and give it a try. More information can be found on our blog at http://blog.cerebrata.com/managing-azure-resource-manager-storage-accounts-through-azure-management-studio/.
The client ID issue that was preventing us from doing this earlier has been resolved from the Microsoft side. This is still on our backlog at the moment.
Henrik: Thanks for your feedback! Renaming a folder can be very slow, depending on the number of blobs actually affected. Sadly, this isn't something we can currently improve, due to what the Storage REST API exposes to us.
First, remember that a folder in Azure storage doesn't really exist. A "folder" is simply a blob whose name happens to contain a slash ("/"). AMS takes multiple blobs that share the same slash-delimited prefix in their names and displays them as a folder, to make navigation more familiar.
The Azure Blob REST API does not expose a rename operation. What you have to do instead is call a Copy operation and then delete the previous blob. You have to do this against EACH blob that shares the "folder" prefix, which is why renaming a folder can be slow in AMS: a copy and a delete are being fired for every blob that needs to be renamed. In fact, there is no SLA on how long a copy might take (though copies within the same account tend to be very fast, since the data isn't really moving anywhere and only a pointer is changed), so if your "folder" holds a large number of files, the complete operation can take a while.
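The copy-then-delete procedure described above can be sketched as follows. This is a simplified illustration, not AMS's actual code: the container is modeled as a dict mapping blob names to data, whereas the real tool issues one Copy Blob and one Delete Blob REST call per matching blob.

```python
# Hedged sketch of a virtual-folder rename: Blob storage has no rename
# operation, so "renaming" a folder means copying every blob whose name
# starts with the old prefix to a new name, then deleting the original.

def rename_folder(blobs, old_prefix, new_prefix):
    """Rename a virtual folder by copy+delete of each matching blob."""
    for name in [n for n in blobs if n.startswith(old_prefix)]:
        new_name = new_prefix + name[len(old_prefix):]
        blobs[new_name] = blobs[name]   # "copy" the blob to its new name
        del blobs[name]                 # delete the original blob
    return blobs

container = {
    "photos/2015/a.jpg": b"...",
    "photos/2015/b.jpg": b"...",
    "docs/readme.txt": b"...",
}
rename_folder(container, "photos/2015/", "photos/archive/")
```

Note that the loop runs once per blob under the prefix, which is exactly why the cost of a folder rename scales with the number of blobs in the "folder".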
The Blob storage API changes constantly, and new features are added regularly. If a new way to handle renames appears, we will revisit this.
Note that you can also switch this without going to the Options page. In the bottom-right corner of the container view are two toggle buttons: one for flat view and one for hierarchical view. We don't have plans to make this a per-container setting at this time, but we will take the suggestion under advisement.
For the "New" indicator, would you only be interested in what was just added within that session, or would you expect the new indicator to be persisted across sessions for some period of time (or even show up as new if someone else created it using a different method)?
Thanks for the feedback, Damiaan. Service Bus support can currently show you the messages in a queue (topic support is coming soon). The default is to peek at the messages, but you can also change that to a Get, which removes them from the queue.
Of your suggestions, what we don't have is the ability to delete a specific message (without just calling Get), or to edit a message and resubmit it to a different queue. We'll keep those in mind as we plan.
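The difference between Peek and Get mentioned above can be sketched with a small in-memory queue. This is an assumed illustration of the semantics, not the product's code: Peek returns a message without removing it, while Get (receive-and-delete) removes it from the queue.

```python
from collections import deque

# Sketch (assumed semantics, not the tool's code) of Peek vs. Get on a
# queue: Peek is non-destructive, Get removes the message.

class SimpleQueue:
    def __init__(self, messages=()):
        self._q = deque(messages)

    def peek(self):
        # Look at the front message without removing it.
        return self._q[0] if self._q else None

    def get(self):
        # Remove and return the front message.
        return self._q.popleft() if self._q else None

q = SimpleQueue(["msg-1", "msg-2"])
q.peek()  # "msg-1" -- still in the queue afterwards
q.get()   # "msg-1" -- removed from the queue
```

This is why Get can stand in for a crude delete: receiving a message with Get takes it off the queue, whereas Peek leaves the queue untouched.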
Can you please contact support at firstname.lastname@example.org and let us know which specific UI interactions you are having issues with? Or update this feedback with more specifics.
AMS allows you to access table storage using either the OData query language or (new this last week) LINQ. By default, the LINQ query designer is shown. It may look a little like SQL, but it's LINQ. When you are looking at the designer, there is a LINQ Help button on the command bar that shows some examples and spells out the differences of using LINQ with Table storage.
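For a rough idea of how the two query styles relate: Azure Table storage is queried over the wire with OData `$filter` expressions, and a LINQ query (in .NET) is translated into the same form. The helper below is a hypothetical illustration of building such a filter string by hand; the entity properties (`PartitionKey`, `Age`) are example names.

```python
# Sketch: build an OData $filter string like the one a LINQ table query
# would be translated into. OData comparison operators are textual
# (eq, ne, gt, ge, lt, le), and LINQ's "&&" becomes OData's "and".

def odata_filter(partition_key, min_age):
    return f"(PartitionKey eq '{partition_key}') and (Age ge {min_age})"

# A LINQ query along the lines of:
#   from e in table where e.PartitionKey == "users" && e.Age >= 21 select e
# would be sent to the service as:
print(odata_filter("users", 21))
# (PartitionKey eq 'users') and (Age ge 21)
```

Keep in mind that Table storage's OData support covers only a subset of operators, so not every LINQ expression can be translated; the LINQ Help button in the designer spells out those limits.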
Does that help?
Thanks Mike, I'm really glad you added this.
This is something we've been thinking about for some time. In fact, our recent addition of the activity log is partly intended to provide a good home for these interactions.
We were thinking of supporting basic storage tasks, e.g. scripting backups to PowerShell, as well as setup tasks for recreating environments. I'd be interested to learn which three tasks people would most like to see.
We've also been toying with exposing the underlying API calls so that a developer could learn more about the requests being made and see the responses. These could then be wrapped up in code through some simple code generation.
Am I right in thinking that you would like the tool to show a confirmation dialog before issuing the Reimage command?
Thanks for the swift response - I appreciate it!
We'll make sure we fix that in the first point release.